Dec 05 09:00:32 localhost kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec 05 09:00:32 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 05 09:00:32 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 05 09:00:32 localhost kernel: BIOS-provided physical RAM map:
Dec 05 09:00:32 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 05 09:00:32 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 05 09:00:32 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 05 09:00:32 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 05 09:00:32 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 05 09:00:32 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 05 09:00:32 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 05 09:00:32 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 05 09:00:32 localhost kernel: NX (Execute Disable) protection: active
Dec 05 09:00:32 localhost kernel: APIC: Static calls initialized
Dec 05 09:00:32 localhost kernel: SMBIOS 2.8 present.
Dec 05 09:00:32 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 05 09:00:32 localhost kernel: Hypervisor detected: KVM
Dec 05 09:00:32 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 05 09:00:32 localhost kernel: kvm-clock: using sched offset of 3214597607 cycles
Dec 05 09:00:32 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 05 09:00:32 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 05 09:00:32 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 05 09:00:32 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 05 09:00:32 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 05 09:00:32 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 05 09:00:32 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 05 09:00:32 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 05 09:00:32 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 05 09:00:32 localhost kernel: Using GB pages for direct mapping
Dec 05 09:00:32 localhost kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec 05 09:00:32 localhost kernel: ACPI: Early table checksum verification disabled
Dec 05 09:00:32 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 05 09:00:32 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 09:00:32 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 09:00:32 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 09:00:32 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 05 09:00:32 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 09:00:32 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 09:00:32 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 05 09:00:32 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 05 09:00:32 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 05 09:00:32 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 05 09:00:32 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 05 09:00:32 localhost kernel: No NUMA configuration found
Dec 05 09:00:32 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 05 09:00:32 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 05 09:00:32 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 05 09:00:32 localhost kernel: Zone ranges:
Dec 05 09:00:32 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 05 09:00:32 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 05 09:00:32 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 05 09:00:32 localhost kernel:   Device   empty
Dec 05 09:00:32 localhost kernel: Movable zone start for each node
Dec 05 09:00:32 localhost kernel: Early memory node ranges
Dec 05 09:00:32 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 05 09:00:32 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 05 09:00:32 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 05 09:00:32 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 05 09:00:32 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 05 09:00:32 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 05 09:00:32 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 05 09:00:32 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 05 09:00:32 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 05 09:00:32 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 05 09:00:32 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 05 09:00:32 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 05 09:00:32 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 05 09:00:32 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 05 09:00:32 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 05 09:00:32 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 05 09:00:32 localhost kernel: TSC deadline timer available
Dec 05 09:00:32 localhost kernel: CPU topo: Max. logical packages:   8
Dec 05 09:00:32 localhost kernel: CPU topo: Max. logical dies:       8
Dec 05 09:00:32 localhost kernel: CPU topo: Max. dies per package:   1
Dec 05 09:00:32 localhost kernel: CPU topo: Max. threads per core:   1
Dec 05 09:00:32 localhost kernel: CPU topo: Num. cores per package:     1
Dec 05 09:00:32 localhost kernel: CPU topo: Num. threads per package:   1
Dec 05 09:00:32 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 05 09:00:32 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 05 09:00:32 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 05 09:00:32 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 05 09:00:32 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 05 09:00:32 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 05 09:00:32 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 05 09:00:32 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 05 09:00:32 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 05 09:00:32 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 05 09:00:32 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 05 09:00:32 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 05 09:00:32 localhost kernel: Booting paravirtualized kernel on KVM
Dec 05 09:00:32 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 05 09:00:32 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 05 09:00:32 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 05 09:00:32 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 05 09:00:32 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 05 09:00:32 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 05 09:00:32 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 05 09:00:32 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec 05 09:00:32 localhost kernel: random: crng init done
Dec 05 09:00:32 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 05 09:00:32 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 05 09:00:32 localhost kernel: Fallback order for Node 0: 0 
Dec 05 09:00:32 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 05 09:00:32 localhost kernel: Policy zone: Normal
Dec 05 09:00:32 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 05 09:00:32 localhost kernel: software IO TLB: area num 8.
Dec 05 09:00:32 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 05 09:00:32 localhost kernel: ftrace: allocating 49335 entries in 193 pages
Dec 05 09:00:32 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 05 09:00:32 localhost kernel: Dynamic Preempt: voluntary
Dec 05 09:00:32 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 05 09:00:32 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 05 09:00:32 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 05 09:00:32 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 05 09:00:32 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 05 09:00:32 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 05 09:00:32 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 05 09:00:32 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 05 09:00:32 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 05 09:00:32 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 05 09:00:32 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 05 09:00:32 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 05 09:00:32 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 05 09:00:32 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 05 09:00:32 localhost kernel: Console: colour VGA+ 80x25
Dec 05 09:00:32 localhost kernel: printk: console [ttyS0] enabled
Dec 05 09:00:32 localhost kernel: ACPI: Core revision 20230331
Dec 05 09:00:32 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 05 09:00:32 localhost kernel: x2apic enabled
Dec 05 09:00:32 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 05 09:00:32 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 05 09:00:32 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 05 09:00:32 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 05 09:00:32 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 05 09:00:32 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 05 09:00:32 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 05 09:00:32 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 05 09:00:32 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 05 09:00:32 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 05 09:00:32 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 05 09:00:32 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 05 09:00:32 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 05 09:00:32 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 05 09:00:32 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 05 09:00:32 localhost kernel: x86/bugs: return thunk changed
Dec 05 09:00:32 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 05 09:00:32 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 05 09:00:32 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 05 09:00:32 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 05 09:00:32 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 05 09:00:32 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 05 09:00:32 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 05 09:00:32 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 05 09:00:32 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 05 09:00:32 localhost kernel: landlock: Up and running.
Dec 05 09:00:32 localhost kernel: Yama: becoming mindful.
Dec 05 09:00:32 localhost kernel: SELinux:  Initializing.
Dec 05 09:00:32 localhost kernel: LSM support for eBPF active
Dec 05 09:00:32 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 05 09:00:32 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 05 09:00:32 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 05 09:00:32 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 05 09:00:32 localhost kernel: ... version:                0
Dec 05 09:00:32 localhost kernel: ... bit width:              48
Dec 05 09:00:32 localhost kernel: ... generic registers:      6
Dec 05 09:00:32 localhost kernel: ... value mask:             0000ffffffffffff
Dec 05 09:00:32 localhost kernel: ... max period:             00007fffffffffff
Dec 05 09:00:32 localhost kernel: ... fixed-purpose events:   0
Dec 05 09:00:32 localhost kernel: ... event mask:             000000000000003f
Dec 05 09:00:32 localhost kernel: signal: max sigframe size: 1776
Dec 05 09:00:32 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 05 09:00:32 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 05 09:00:32 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 05 09:00:32 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 05 09:00:32 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 05 09:00:32 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 05 09:00:32 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 05 09:00:32 localhost kernel: node 0 deferred pages initialised in 18ms
Dec 05 09:00:32 localhost kernel: Memory: 7763864K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618208K reserved, 0K cma-reserved)
Dec 05 09:00:32 localhost kernel: devtmpfs: initialized
Dec 05 09:00:32 localhost kernel: x86/mm: Memory block size: 128MB
Dec 05 09:00:32 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 05 09:00:32 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 05 09:00:32 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 05 09:00:32 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 05 09:00:32 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 05 09:00:32 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 05 09:00:32 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 05 09:00:32 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 05 09:00:32 localhost kernel: audit: type=2000 audit(1764925229.991:1): state=initialized audit_enabled=0 res=1
Dec 05 09:00:32 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 05 09:00:32 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 05 09:00:32 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 05 09:00:32 localhost kernel: cpuidle: using governor menu
Dec 05 09:00:32 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 05 09:00:32 localhost kernel: PCI: Using configuration type 1 for base access
Dec 05 09:00:32 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 05 09:00:32 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 05 09:00:32 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 05 09:00:32 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 05 09:00:32 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 05 09:00:32 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 05 09:00:32 localhost kernel: Demotion targets for Node 0: null
Dec 05 09:00:32 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 05 09:00:32 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 05 09:00:32 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 05 09:00:32 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 05 09:00:32 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 05 09:00:32 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 05 09:00:32 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 05 09:00:32 localhost kernel: ACPI: Interpreter enabled
Dec 05 09:00:32 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 05 09:00:32 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 05 09:00:32 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 05 09:00:32 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 05 09:00:32 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 05 09:00:32 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 05 09:00:32 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [3] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [4] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [5] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [6] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [7] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [8] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [9] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [10] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [11] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [12] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [13] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [14] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [15] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [16] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [17] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [18] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [19] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [20] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [21] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [22] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [23] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [24] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [25] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [26] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [27] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [28] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [29] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [30] registered
Dec 05 09:00:32 localhost kernel: acpiphp: Slot [31] registered
Dec 05 09:00:32 localhost kernel: PCI host bridge to bus 0000:00
Dec 05 09:00:32 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 05 09:00:32 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 05 09:00:32 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 05 09:00:32 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 05 09:00:32 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 05 09:00:32 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 05 09:00:32 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 05 09:00:32 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 05 09:00:32 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 05 09:00:32 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 05 09:00:32 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 05 09:00:32 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 05 09:00:32 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 05 09:00:32 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 05 09:00:32 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 05 09:00:32 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 05 09:00:32 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 05 09:00:32 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 05 09:00:32 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 05 09:00:32 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 05 09:00:32 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 05 09:00:32 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 05 09:00:32 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 05 09:00:32 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 05 09:00:32 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 05 09:00:32 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 05 09:00:32 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 05 09:00:32 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 05 09:00:32 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 05 09:00:32 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 05 09:00:32 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 05 09:00:32 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 05 09:00:32 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 05 09:00:32 localhost kernel: iommu: Default domain type: Translated
Dec 05 09:00:32 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 05 09:00:32 localhost kernel: SCSI subsystem initialized
Dec 05 09:00:32 localhost kernel: ACPI: bus type USB registered
Dec 05 09:00:32 localhost kernel: usbcore: registered new interface driver usbfs
Dec 05 09:00:32 localhost kernel: usbcore: registered new interface driver hub
Dec 05 09:00:32 localhost kernel: usbcore: registered new device driver usb
Dec 05 09:00:32 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 05 09:00:32 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 05 09:00:32 localhost kernel: PTP clock support registered
Dec 05 09:00:32 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 05 09:00:32 localhost kernel: NetLabel: Initializing
Dec 05 09:00:32 localhost kernel: NetLabel:  domain hash size = 128
Dec 05 09:00:32 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 05 09:00:32 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 05 09:00:32 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 05 09:00:32 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 05 09:00:32 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 05 09:00:32 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 05 09:00:32 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 05 09:00:32 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 05 09:00:32 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 05 09:00:32 localhost kernel: vgaarb: loaded
Dec 05 09:00:32 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 05 09:00:32 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 05 09:00:32 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 05 09:00:32 localhost kernel: pnp: PnP ACPI init
Dec 05 09:00:32 localhost kernel: pnp 00:03: [dma 2]
Dec 05 09:00:32 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 05 09:00:32 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 05 09:00:32 localhost kernel: NET: Registered PF_INET protocol family
Dec 05 09:00:32 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 05 09:00:32 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 05 09:00:32 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 05 09:00:32 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 05 09:00:32 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 05 09:00:32 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 05 09:00:32 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 05 09:00:32 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 05 09:00:32 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 05 09:00:32 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 05 09:00:32 localhost kernel: NET: Registered PF_XDP protocol family
Dec 05 09:00:32 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 05 09:00:32 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 05 09:00:32 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 05 09:00:32 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 05 09:00:32 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 05 09:00:32 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 05 09:00:32 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 05 09:00:32 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 75367 usecs
Dec 05 09:00:32 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 05 09:00:32 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 05 09:00:32 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 05 09:00:32 localhost kernel: ACPI: bus type thunderbolt registered
Dec 05 09:00:32 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 05 09:00:32 localhost kernel: Initialise system trusted keyrings
Dec 05 09:00:32 localhost kernel: Key type blacklist registered
Dec 05 09:00:32 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 05 09:00:32 localhost kernel: zbud: loaded
Dec 05 09:00:32 localhost kernel: integrity: Platform Keyring initialized
Dec 05 09:00:32 localhost kernel: integrity: Machine keyring initialized
Dec 05 09:00:32 localhost kernel: Freeing initrd memory: 87804K
Dec 05 09:00:32 localhost kernel: NET: Registered PF_ALG protocol family
Dec 05 09:00:32 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 05 09:00:32 localhost kernel: Key type asymmetric registered
Dec 05 09:00:32 localhost kernel: Asymmetric key parser 'x509' registered
Dec 05 09:00:32 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 05 09:00:32 localhost kernel: io scheduler mq-deadline registered
Dec 05 09:00:32 localhost kernel: io scheduler kyber registered
Dec 05 09:00:32 localhost kernel: io scheduler bfq registered
Dec 05 09:00:32 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 05 09:00:32 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 05 09:00:32 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 05 09:00:32 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 05 09:00:32 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 05 09:00:32 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 05 09:00:32 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 05 09:00:32 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 05 09:00:32 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 05 09:00:32 localhost kernel: Non-volatile memory driver v1.3
Dec 05 09:00:32 localhost kernel: rdac: device handler registered
Dec 05 09:00:32 localhost kernel: hp_sw: device handler registered
Dec 05 09:00:32 localhost kernel: emc: device handler registered
Dec 05 09:00:32 localhost kernel: alua: device handler registered
Dec 05 09:00:32 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 05 09:00:32 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 05 09:00:32 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 05 09:00:32 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 05 09:00:32 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 05 09:00:32 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 05 09:00:32 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 05 09:00:32 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec 05 09:00:32 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 05 09:00:32 localhost kernel: hub 1-0:1.0: USB hub found
Dec 05 09:00:32 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 05 09:00:32 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 05 09:00:32 localhost kernel: usbserial: USB Serial support registered for generic
Dec 05 09:00:32 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 05 09:00:32 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 05 09:00:32 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 05 09:00:32 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 05 09:00:32 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 05 09:00:32 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 05 09:00:32 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 05 09:00:32 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 05 09:00:32 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-05T09:00:31 UTC (1764925231)
Dec 05 09:00:32 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 05 09:00:32 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 05 09:00:32 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 05 09:00:32 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 05 09:00:32 localhost kernel: usbcore: registered new interface driver usbhid
Dec 05 09:00:32 localhost kernel: usbhid: USB HID core driver
Dec 05 09:00:32 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 05 09:00:32 localhost kernel: Initializing XFRM netlink socket
Dec 05 09:00:32 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 05 09:00:32 localhost kernel: Segment Routing with IPv6
Dec 05 09:00:32 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 05 09:00:32 localhost kernel: mpls_gso: MPLS GSO support
Dec 05 09:00:32 localhost kernel: IPI shorthand broadcast: enabled
Dec 05 09:00:32 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 05 09:00:32 localhost kernel: AES CTR mode by8 optimization enabled
Dec 05 09:00:32 localhost kernel: sched_clock: Marking stable (1627004708, 153470865)->(1917408665, -136933092)
Dec 05 09:00:32 localhost kernel: registered taskstats version 1
Dec 05 09:00:32 localhost kernel: Loading compiled-in X.509 certificates
Dec 05 09:00:32 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 05 09:00:32 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 05 09:00:32 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 05 09:00:32 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 05 09:00:32 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 05 09:00:32 localhost kernel: Demotion targets for Node 0: null
Dec 05 09:00:32 localhost kernel: page_owner is disabled
Dec 05 09:00:32 localhost kernel: Key type .fscrypt registered
Dec 05 09:00:32 localhost kernel: Key type fscrypt-provisioning registered
Dec 05 09:00:32 localhost kernel: Key type big_key registered
Dec 05 09:00:32 localhost kernel: Key type encrypted registered
Dec 05 09:00:32 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 05 09:00:32 localhost kernel: Loading compiled-in module X.509 certificates
Dec 05 09:00:32 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 05 09:00:32 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 05 09:00:32 localhost kernel: ima: No architecture policies found
Dec 05 09:00:32 localhost kernel: evm: Initialising EVM extended attributes:
Dec 05 09:00:32 localhost kernel: evm: security.selinux
Dec 05 09:00:32 localhost kernel: evm: security.SMACK64 (disabled)
Dec 05 09:00:32 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 05 09:00:32 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 05 09:00:32 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 05 09:00:32 localhost kernel: evm: security.apparmor (disabled)
Dec 05 09:00:32 localhost kernel: evm: security.ima
Dec 05 09:00:32 localhost kernel: evm: security.capability
Dec 05 09:00:32 localhost kernel: evm: HMAC attrs: 0x1
Dec 05 09:00:32 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 05 09:00:32 localhost kernel: Running certificate verification RSA selftest
Dec 05 09:00:32 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 05 09:00:32 localhost kernel: Running certificate verification ECDSA selftest
Dec 05 09:00:32 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 05 09:00:32 localhost kernel: clk: Disabling unused clocks
Dec 05 09:00:32 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 05 09:00:32 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec 05 09:00:32 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 05 09:00:32 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec 05 09:00:32 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 05 09:00:32 localhost kernel: Run /init as init process
Dec 05 09:00:32 localhost kernel:   with arguments:
Dec 05 09:00:32 localhost kernel:     /init
Dec 05 09:00:32 localhost kernel:   with environment:
Dec 05 09:00:32 localhost kernel:     HOME=/
Dec 05 09:00:32 localhost kernel:     TERM=linux
Dec 05 09:00:32 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64
Dec 05 09:00:32 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 05 09:00:32 localhost systemd[1]: Detected virtualization kvm.
Dec 05 09:00:32 localhost systemd[1]: Detected architecture x86-64.
Dec 05 09:00:32 localhost systemd[1]: Running in initrd.
Dec 05 09:00:32 localhost systemd[1]: No hostname configured, using default hostname.
Dec 05 09:00:32 localhost systemd[1]: Hostname set to <localhost>.
Dec 05 09:00:32 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 05 09:00:32 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 05 09:00:32 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 05 09:00:32 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 05 09:00:32 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 05 09:00:32 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 05 09:00:32 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 05 09:00:32 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 05 09:00:32 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 05 09:00:32 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 05 09:00:32 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 05 09:00:32 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 05 09:00:32 localhost systemd[1]: Reached target Local File Systems.
Dec 05 09:00:32 localhost systemd[1]: Reached target Path Units.
Dec 05 09:00:32 localhost systemd[1]: Reached target Slice Units.
Dec 05 09:00:32 localhost systemd[1]: Reached target Swaps.
Dec 05 09:00:32 localhost systemd[1]: Reached target Timer Units.
Dec 05 09:00:32 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 05 09:00:32 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 05 09:00:32 localhost systemd[1]: Listening on Journal Socket.
Dec 05 09:00:32 localhost systemd[1]: Listening on udev Control Socket.
Dec 05 09:00:32 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 05 09:00:32 localhost systemd[1]: Reached target Socket Units.
Dec 05 09:00:32 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 05 09:00:32 localhost systemd[1]: Starting Journal Service...
Dec 05 09:00:32 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 05 09:00:32 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 05 09:00:32 localhost systemd[1]: Starting Create System Users...
Dec 05 09:00:32 localhost systemd[1]: Starting Setup Virtual Console...
Dec 05 09:00:32 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 05 09:00:32 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 05 09:00:32 localhost systemd[1]: Finished Create System Users.
Dec 05 09:00:32 localhost systemd-journald[307]: Journal started
Dec 05 09:00:32 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/f275b88f2c9947a9a7475d8960473fbf) is 8.0M, max 153.6M, 145.6M free.
Dec 05 09:00:32 localhost systemd-sysusers[312]: Creating group 'users' with GID 100.
Dec 05 09:00:32 localhost systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Dec 05 09:00:32 localhost systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 05 09:00:32 localhost systemd[1]: Started Journal Service.
Dec 05 09:00:32 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 05 09:00:32 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 05 09:00:32 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 05 09:00:32 localhost systemd[1]: Finished Setup Virtual Console.
Dec 05 09:00:32 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 05 09:00:32 localhost systemd[1]: Starting dracut cmdline hook...
Dec 05 09:00:32 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 05 09:00:32 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Dec 05 09:00:32 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 05 09:00:32 localhost systemd[1]: Finished dracut cmdline hook.
Dec 05 09:00:32 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 05 09:00:32 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 05 09:00:32 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 05 09:00:32 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 05 09:00:32 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 05 09:00:32 localhost kernel: RPC: Registered udp transport module.
Dec 05 09:00:32 localhost kernel: RPC: Registered tcp transport module.
Dec 05 09:00:32 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 05 09:00:32 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 05 09:00:32 localhost rpc.statd[443]: Version 2.5.4 starting
Dec 05 09:00:32 localhost rpc.statd[443]: Initializing NSM state
Dec 05 09:00:32 localhost rpc.idmapd[448]: Setting log level to 0
Dec 05 09:00:32 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 05 09:00:32 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 05 09:00:32 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Dec 05 09:00:32 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 05 09:00:32 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 05 09:00:32 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 05 09:00:32 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 05 09:00:32 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 05 09:00:32 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 05 09:00:32 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 05 09:00:32 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 05 09:00:32 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 05 09:00:32 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 05 09:00:32 localhost systemd[1]: Reached target Network.
Dec 05 09:00:32 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 05 09:00:32 localhost systemd[1]: Starting dracut initqueue hook...
Dec 05 09:00:32 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 05 09:00:32 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 05 09:00:32 localhost kernel:  vda: vda1
Dec 05 09:00:33 localhost kernel: libata version 3.00 loaded.
Dec 05 09:00:33 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 05 09:00:33 localhost kernel: scsi host0: ata_piix
Dec 05 09:00:33 localhost kernel: scsi host1: ata_piix
Dec 05 09:00:33 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 05 09:00:33 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 05 09:00:33 localhost systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 05 09:00:33 localhost systemd[1]: Reached target Initrd Root Device.
Dec 05 09:00:33 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 05 09:00:33 localhost kernel: ata1: found unknown device (class 0)
Dec 05 09:00:33 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 05 09:00:33 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 05 09:00:33 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 05 09:00:33 localhost systemd[1]: Reached target System Initialization.
Dec 05 09:00:33 localhost systemd[1]: Reached target Basic System.
Dec 05 09:00:33 localhost systemd-udevd[472]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 09:00:33 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 05 09:00:33 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 05 09:00:33 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 05 09:00:33 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 05 09:00:33 localhost systemd[1]: Finished dracut initqueue hook.
Dec 05 09:00:33 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 05 09:00:33 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 05 09:00:33 localhost systemd[1]: Reached target Remote File Systems.
Dec 05 09:00:33 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 05 09:00:33 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 05 09:00:33 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec 05 09:00:33 localhost systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Dec 05 09:00:33 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 05 09:00:33 localhost systemd[1]: Mounting /sysroot...
Dec 05 09:00:33 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 05 09:00:33 localhost kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec 05 09:00:33 localhost kernel: XFS (vda1): Ending clean mount
Dec 05 09:00:33 localhost systemd[1]: Mounted /sysroot.
Dec 05 09:00:33 localhost systemd[1]: Reached target Initrd Root File System.
Dec 05 09:00:33 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 05 09:00:34 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 05 09:00:34 localhost systemd[1]: Reached target Initrd File Systems.
Dec 05 09:00:34 localhost systemd[1]: Reached target Initrd Default Target.
Dec 05 09:00:34 localhost systemd[1]: Starting dracut mount hook...
Dec 05 09:00:34 localhost systemd[1]: Finished dracut mount hook.
Dec 05 09:00:34 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 05 09:00:34 localhost rpc.idmapd[448]: exiting on signal 15
Dec 05 09:00:34 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 05 09:00:34 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 05 09:00:34 localhost systemd[1]: Stopped target Network.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Timer Units.
Dec 05 09:00:34 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 05 09:00:34 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Basic System.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Path Units.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Remote File Systems.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Slice Units.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Socket Units.
Dec 05 09:00:34 localhost systemd[1]: Stopped target System Initialization.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Local File Systems.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Swaps.
Dec 05 09:00:34 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped dracut mount hook.
Dec 05 09:00:34 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 05 09:00:34 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 05 09:00:34 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 05 09:00:34 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 05 09:00:34 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 05 09:00:34 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 05 09:00:34 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 05 09:00:34 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 05 09:00:34 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 05 09:00:34 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 05 09:00:34 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Closed udev Control Socket.
Dec 05 09:00:34 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Closed udev Kernel Socket.
Dec 05 09:00:34 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 05 09:00:34 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 05 09:00:34 localhost systemd[1]: Starting Cleanup udev Database...
Dec 05 09:00:34 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 05 09:00:34 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 05 09:00:34 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped Create System Users.
Dec 05 09:00:34 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 05 09:00:34 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Finished Cleanup udev Database.
Dec 05 09:00:34 localhost systemd[1]: Reached target Switch Root.
Dec 05 09:00:34 localhost systemd[1]: Starting Switch Root...
Dec 05 09:00:34 localhost systemd[1]: Switching root.
Dec 05 09:00:34 localhost systemd-journald[307]: Journal stopped
Dec 05 09:00:34 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
Dec 05 09:00:34 localhost kernel: audit: type=1404 audit(1764925234.376:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 05 09:00:34 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:00:34 localhost kernel: SELinux:  policy capability open_perms=1
Dec 05 09:00:34 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:00:34 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:00:34 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:00:34 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:00:34 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:00:34 localhost kernel: audit: type=1403 audit(1764925234.497:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 05 09:00:34 localhost systemd[1]: Successfully loaded SELinux policy in 123.795ms.
Dec 05 09:00:34 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.259ms.
Dec 05 09:00:34 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 05 09:00:34 localhost systemd[1]: Detected virtualization kvm.
Dec 05 09:00:34 localhost systemd[1]: Detected architecture x86-64.
Dec 05 09:00:34 localhost systemd-rc-local-generator[639]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:00:34 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped Switch Root.
Dec 05 09:00:34 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 05 09:00:34 localhost systemd[1]: Created slice Slice /system/getty.
Dec 05 09:00:34 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 05 09:00:34 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 05 09:00:34 localhost systemd[1]: Created slice User and Session Slice.
Dec 05 09:00:34 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 05 09:00:34 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 05 09:00:34 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 05 09:00:34 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Switch Root.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 05 09:00:34 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 05 09:00:34 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 05 09:00:34 localhost systemd[1]: Reached target Path Units.
Dec 05 09:00:34 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 05 09:00:34 localhost systemd[1]: Reached target Slice Units.
Dec 05 09:00:34 localhost systemd[1]: Reached target Swaps.
Dec 05 09:00:34 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 05 09:00:34 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 05 09:00:34 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 05 09:00:34 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 05 09:00:34 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 05 09:00:34 localhost systemd[1]: Listening on udev Control Socket.
Dec 05 09:00:34 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 05 09:00:34 localhost systemd[1]: Mounting Huge Pages File System...
Dec 05 09:00:34 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 05 09:00:34 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 05 09:00:34 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 05 09:00:34 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 05 09:00:34 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 05 09:00:34 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 05 09:00:34 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 05 09:00:34 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 05 09:00:34 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 05 09:00:34 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 05 09:00:34 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 05 09:00:34 localhost systemd[1]: Stopped Journal Service.
Dec 05 09:00:34 localhost systemd[1]: Starting Journal Service...
Dec 05 09:00:34 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 05 09:00:34 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 05 09:00:34 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 05 09:00:34 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 05 09:00:34 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 05 09:00:34 localhost systemd-journald[680]: Journal started
Dec 05 09:00:34 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 05 09:00:34 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 05 09:00:34 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 05 09:00:34 localhost kernel: fuse: init (API version 7.37)
Dec 05 09:00:34 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 05 09:00:34 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 05 09:00:34 localhost systemd[1]: Started Journal Service.
Dec 05 09:00:34 localhost systemd[1]: Mounted Huge Pages File System.
Dec 05 09:00:34 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 05 09:00:34 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 05 09:00:34 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 05 09:00:34 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 05 09:00:34 localhost kernel: ACPI: bus type drm_connector registered
Dec 05 09:00:34 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 05 09:00:34 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 05 09:00:34 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 05 09:00:34 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 05 09:00:34 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 05 09:00:34 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 05 09:00:34 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 05 09:00:34 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 05 09:00:34 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 05 09:00:34 localhost systemd[1]: Mounting FUSE Control File System...
Dec 05 09:00:34 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 05 09:00:34 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 05 09:00:34 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 05 09:00:34 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 05 09:00:34 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 05 09:00:34 localhost systemd[1]: Starting Create System Users...
Dec 05 09:00:34 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 05 09:00:34 localhost systemd-journald[680]: Received client request to flush runtime journal.
Dec 05 09:00:34 localhost systemd[1]: Mounted FUSE Control File System.
Dec 05 09:00:34 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 05 09:00:34 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 05 09:00:34 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 05 09:00:35 localhost systemd[1]: Finished Create System Users.
Dec 05 09:00:35 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 05 09:00:35 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 05 09:00:35 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 05 09:00:35 localhost systemd[1]: Reached target Local File Systems.
Dec 05 09:00:35 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 05 09:00:35 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 05 09:00:35 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 05 09:00:35 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 05 09:00:35 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 05 09:00:35 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 05 09:00:35 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 05 09:00:35 localhost bootctl[696]: Couldn't find EFI system partition, skipping.
Dec 05 09:00:35 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 05 09:00:35 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 05 09:00:35 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 05 09:00:35 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 05 09:00:35 localhost systemd[1]: Starting Security Auditing Service...
Dec 05 09:00:35 localhost systemd[1]: Starting RPC Bind...
Dec 05 09:00:35 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 05 09:00:35 localhost auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 05 09:00:35 localhost auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 05 09:00:35 localhost systemd[1]: Started RPC Bind.
Dec 05 09:00:35 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 05 09:00:35 localhost augenrules[707]: /sbin/augenrules: No change
Dec 05 09:00:35 localhost augenrules[722]: No rules
Dec 05 09:00:35 localhost augenrules[722]: enabled 1
Dec 05 09:00:35 localhost augenrules[722]: failure 1
Dec 05 09:00:35 localhost augenrules[722]: pid 702
Dec 05 09:00:35 localhost augenrules[722]: rate_limit 0
Dec 05 09:00:35 localhost augenrules[722]: backlog_limit 8192
Dec 05 09:00:35 localhost augenrules[722]: lost 0
Dec 05 09:00:35 localhost augenrules[722]: backlog 3
Dec 05 09:00:35 localhost augenrules[722]: backlog_wait_time 60000
Dec 05 09:00:35 localhost augenrules[722]: backlog_wait_time_actual 0
Dec 05 09:00:35 localhost augenrules[722]: enabled 1
Dec 05 09:00:35 localhost augenrules[722]: failure 1
Dec 05 09:00:35 localhost augenrules[722]: pid 702
Dec 05 09:00:35 localhost augenrules[722]: rate_limit 0
Dec 05 09:00:35 localhost augenrules[722]: backlog_limit 8192
Dec 05 09:00:35 localhost augenrules[722]: lost 0
Dec 05 09:00:35 localhost augenrules[722]: backlog 3
Dec 05 09:00:35 localhost augenrules[722]: backlog_wait_time 60000
Dec 05 09:00:35 localhost augenrules[722]: backlog_wait_time_actual 0
Dec 05 09:00:35 localhost augenrules[722]: enabled 1
Dec 05 09:00:35 localhost augenrules[722]: failure 1
Dec 05 09:00:35 localhost augenrules[722]: pid 702
Dec 05 09:00:35 localhost augenrules[722]: rate_limit 0
Dec 05 09:00:35 localhost augenrules[722]: backlog_limit 8192
Dec 05 09:00:35 localhost augenrules[722]: lost 0
Dec 05 09:00:35 localhost augenrules[722]: backlog 3
Dec 05 09:00:35 localhost augenrules[722]: backlog_wait_time 60000
Dec 05 09:00:35 localhost augenrules[722]: backlog_wait_time_actual 0
Dec 05 09:00:35 localhost systemd[1]: Started Security Auditing Service.
Dec 05 09:00:35 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 05 09:00:35 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 05 09:00:35 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 05 09:00:35 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 05 09:00:35 localhost systemd[1]: Starting Update is Completed...
Dec 05 09:00:35 localhost systemd[1]: Finished Update is Completed.
Dec 05 09:00:35 localhost systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Dec 05 09:00:35 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 05 09:00:35 localhost systemd[1]: Reached target System Initialization.
Dec 05 09:00:35 localhost systemd[1]: Started dnf makecache --timer.
Dec 05 09:00:35 localhost systemd[1]: Started Daily rotation of log files.
Dec 05 09:00:35 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 05 09:00:35 localhost systemd[1]: Reached target Timer Units.
Dec 05 09:00:35 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 05 09:00:35 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 05 09:00:35 localhost systemd[1]: Reached target Socket Units.
Dec 05 09:00:35 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 05 09:00:35 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 05 09:00:35 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 05 09:00:35 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 05 09:00:35 localhost systemd-udevd[741]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 09:00:35 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 05 09:00:35 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 05 09:00:35 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 05 09:00:35 localhost systemd[1]: Reached target Basic System.
Dec 05 09:00:35 localhost dbus-broker-lau[761]: Ready
Dec 05 09:00:35 localhost systemd[1]: Starting NTP client/server...
Dec 05 09:00:35 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 05 09:00:35 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 05 09:00:35 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 05 09:00:35 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 05 09:00:35 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 05 09:00:35 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 05 09:00:35 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 05 09:00:35 localhost systemd[1]: Started irqbalance daemon.
Dec 05 09:00:35 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 05 09:00:35 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 09:00:35 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 09:00:35 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 09:00:35 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 05 09:00:35 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 05 09:00:35 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 05 09:00:35 localhost systemd[1]: Starting User Login Management...
Dec 05 09:00:35 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 05 09:00:35 localhost chronyd[793]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 05 09:00:35 localhost chronyd[793]: Loaded 0 symmetric keys
Dec 05 09:00:35 localhost chronyd[793]: Using right/UTC timezone to obtain leap second data
Dec 05 09:00:35 localhost chronyd[793]: Loaded seccomp filter (level 2)
Dec 05 09:00:35 localhost systemd[1]: Started NTP client/server.
Dec 05 09:00:35 localhost systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 05 09:00:35 localhost systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 05 09:00:35 localhost systemd-logind[789]: New seat seat0.
Dec 05 09:00:35 localhost systemd[1]: Started User Login Management.
Dec 05 09:00:35 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 05 09:00:35 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 05 09:00:35 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 05 09:00:35 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 05 09:00:35 localhost kernel: Console: switching to colour dummy device 80x25
Dec 05 09:00:35 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 05 09:00:35 localhost kernel: [drm] features: -context_init
Dec 05 09:00:36 localhost kernel: [drm] number of scanouts: 1
Dec 05 09:00:36 localhost kernel: [drm] number of cap sets: 0
Dec 05 09:00:36 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 05 09:00:36 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 05 09:00:36 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 05 09:00:36 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 05 09:00:36 localhost kernel: kvm_amd: TSC scaling supported
Dec 05 09:00:36 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 05 09:00:36 localhost kernel: kvm_amd: Nested Paging enabled
Dec 05 09:00:36 localhost kernel: kvm_amd: LBR virtualization supported
Dec 05 09:00:36 localhost iptables.init[783]: iptables: Applying firewall rules: [  OK  ]
Dec 05 09:00:36 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 05 09:00:36 localhost cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Fri, 05 Dec 2025 09:00:36 +0000. Up 6.30 seconds.
Dec 05 09:00:36 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 05 09:00:36 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 05 09:00:36 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpjzmp8nvx.mount: Deactivated successfully.
Dec 05 09:00:36 localhost systemd[1]: Starting Hostname Service...
Dec 05 09:00:36 localhost systemd[1]: Started Hostname Service.
Dec 05 09:00:36 np0005546606.novalocal systemd-hostnamed[854]: Hostname set to <np0005546606.novalocal> (static)
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Reached target Preparation for Network.
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Starting Network Manager...
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7467] NetworkManager (version 1.54.1-1.el9) is starting... (boot:77fa800c-2983-4f5e-b315-57495a3fe27a)
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7471] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7536] manager[0x560c7ccc0080]: monitoring kernel firmware directory '/lib/firmware'.
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7583] hostname: hostname: using hostnamed
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7583] hostname: static hostname changed from (none) to "np0005546606.novalocal"
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7589] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7723] manager[0x560c7ccc0080]: rfkill: Wi-Fi hardware radio set enabled
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7724] manager[0x560c7ccc0080]: rfkill: WWAN hardware radio set enabled
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7760] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7760] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7760] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7761] manager: Networking is enabled by state file
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7762] settings: Loaded settings plugin: keyfile (internal)
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7774] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7790] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7800] dhcp: init: Using DHCP client 'internal'
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7802] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7813] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7819] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7825] device (lo): Activation: starting connection 'lo' (f6d82822-cadb-414c-ae68-8f6717460373)
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7833] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7835] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7864] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7867] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7869] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7870] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7872] device (eth0): carrier: link connected
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7875] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7879] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7884] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7888] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7888] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7891] manager: NetworkManager state is now CONNECTING
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7892] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7901] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Started Network Manager.
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.7904] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Reached target Network.
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.8054] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.8056] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 05 09:00:36 np0005546606.novalocal NetworkManager[858]: <info>  [1764925236.8061] device (lo): Activation: successful, device activated.
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Reached target NFS client services.
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: Reached target Remote File Systems.
Dec 05 09:00:36 np0005546606.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 05 09:00:37 np0005546606.novalocal NetworkManager[858]: <info>  [1764925237.7414] dhcp4 (eth0): state changed new lease, address=38.129.56.228
Dec 05 09:00:37 np0005546606.novalocal NetworkManager[858]: <info>  [1764925237.7428] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 05 09:00:37 np0005546606.novalocal NetworkManager[858]: <info>  [1764925237.7447] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:00:37 np0005546606.novalocal NetworkManager[858]: <info>  [1764925237.7471] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:00:37 np0005546606.novalocal NetworkManager[858]: <info>  [1764925237.7472] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:00:37 np0005546606.novalocal NetworkManager[858]: <info>  [1764925237.7475] manager: NetworkManager state is now CONNECTED_SITE
Dec 05 09:00:37 np0005546606.novalocal NetworkManager[858]: <info>  [1764925237.7477] device (eth0): Activation: successful, device activated.
Dec 05 09:00:37 np0005546606.novalocal NetworkManager[858]: <info>  [1764925237.7480] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 05 09:00:37 np0005546606.novalocal NetworkManager[858]: <info>  [1764925237.7482] manager: startup complete
Dec 05 09:00:37 np0005546606.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 05 09:00:37 np0005546606.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Fri, 05 Dec 2025 09:00:38 +0000. Up 8.13 seconds.
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: |  eth0  | True |        38.129.56.228         | 255.255.255.0 | global | fa:16:3e:6a:63:46 |
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe6a:6346/64 |       .       |  link  | fa:16:3e:6a:63:46 |
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 05 09:00:38 np0005546606.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 05 09:00:38 np0005546606.novalocal useradd[987]: new group: name=cloud-user, GID=1001
Dec 05 09:00:38 np0005546606.novalocal useradd[987]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 05 09:00:38 np0005546606.novalocal useradd[987]: add 'cloud-user' to group 'adm'
Dec 05 09:00:38 np0005546606.novalocal useradd[987]: add 'cloud-user' to group 'systemd-journal'
Dec 05 09:00:38 np0005546606.novalocal useradd[987]: add 'cloud-user' to shadow group 'adm'
Dec 05 09:00:38 np0005546606.novalocal useradd[987]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: Generating public/private rsa key pair.
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: The key fingerprint is:
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: SHA256:RshVkPt7D5c6zD8u2qcQeRM6kvH9GCz/R/NcuacqMAU root@np0005546606.novalocal
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: The key's randomart image is:
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: +---[RSA 3072]----+
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |        o+.      |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |     . oE        |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |      o oo  .    |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |       ..+.= .   |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |        SoB *   .|
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |       .o..* = +o|
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |         o.++ +o=|
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |          oo*=o.=|
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |          .+=X*+.|
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: The key fingerprint is:
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: SHA256:CkjYV4XcgREDaZgCD9aSPCxXcDGoqbf9o/80SGEmiXs root@np0005546606.novalocal
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: The key's randomart image is:
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: +---[ECDSA 256]---+
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |*.+**==Bo.       |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |+@=+o++..        |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |o=B.+ +          |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |o. + + .         |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |. o E . S        |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |. .. o o         |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: | . o  o o        |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |  . . .. .       |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |    .+oo.        |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: The key fingerprint is:
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: SHA256:cGmTyoK2/oEKlYkJLaIBASyiav31gw5bHCqoEMYlKiI root@np0005546606.novalocal
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: The key's randomart image is:
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: +--[ED25519 256]--+
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |*.               |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |+o       o       |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |O...  . *        |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |B=o+ . = .       |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |EoB . o.S        |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |*=.+ .o..        |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |+.o.oo.oo        |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |+o  .o+. o       |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: |o ......  .      |
Dec 05 09:00:39 np0005546606.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Reached target Network is Online.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Starting System Logging Service...
Dec 05 09:00:39 np0005546606.novalocal sm-notify[1003]: Version 2.5.4 starting
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Starting Permit User Sessions...
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 05 09:00:39 np0005546606.novalocal sshd[1005]: Server listening on 0.0.0.0 port 22.
Dec 05 09:00:39 np0005546606.novalocal sshd[1005]: Server listening on :: port 22.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Finished Permit User Sessions.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Started Command Scheduler.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Started Getty on tty1.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 05 09:00:39 np0005546606.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Dec 05 09:00:39 np0005546606.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Reached target Login Prompts.
Dec 05 09:00:39 np0005546606.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 40% if used.)
Dec 05 09:00:39 np0005546606.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Dec 05 09:00:39 np0005546606.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Dec 05 09:00:39 np0005546606.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Started System Logging Service.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Reached target Multi-User System.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 05 09:00:39 np0005546606.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:00:39 np0005546606.novalocal kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Dec 05 09:00:39 np0005546606.novalocal kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec 05 09:00:39 np0005546606.novalocal cloud-init[1145]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Fri, 05 Dec 2025 09:00:39 +0000. Up 9.54 seconds.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 05 09:00:39 np0005546606.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 05 09:00:39 np0005546606.novalocal dracut[1264]: dracut-057-102.git20250818.el9
Dec 05 09:00:39 np0005546606.novalocal sshd-session[1265]: Connection reset by 38.102.83.114 port 58094 [preauth]
Dec 05 09:00:39 np0005546606.novalocal sshd-session[1280]: Unable to negotiate with 38.102.83.114 port 58098: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 05 09:00:39 np0005546606.novalocal sshd-session[1286]: Unable to negotiate with 38.102.83.114 port 58106: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 05 09:00:39 np0005546606.novalocal sshd-session[1288]: Unable to negotiate with 38.102.83.114 port 58120: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 05 09:00:39 np0005546606.novalocal sshd-session[1290]: Connection closed by 38.102.83.114 port 58128 [preauth]
Dec 05 09:00:39 np0005546606.novalocal sshd-session[1294]: Connection reset by 38.102.83.114 port 58134 [preauth]
Dec 05 09:00:39 np0005546606.novalocal sshd-session[1296]: Unable to negotiate with 38.102.83.114 port 58140: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 05 09:00:39 np0005546606.novalocal sshd-session[1302]: Unable to negotiate with 38.102.83.114 port 58142: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 05 09:00:39 np0005546606.novalocal dracut[1267]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec 05 09:00:39 np0005546606.novalocal sshd-session[1284]: Connection closed by 38.102.83.114 port 58100 [preauth]
Dec 05 09:00:39 np0005546606.novalocal cloud-init[1317]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Fri, 05 Dec 2025 09:00:39 +0000. Up 9.97 seconds.
Dec 05 09:00:39 np0005546606.novalocal cloud-init[1354]: #############################################################
Dec 05 09:00:39 np0005546606.novalocal cloud-init[1355]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 05 09:00:39 np0005546606.novalocal cloud-init[1357]: 256 SHA256:CkjYV4XcgREDaZgCD9aSPCxXcDGoqbf9o/80SGEmiXs root@np0005546606.novalocal (ECDSA)
Dec 05 09:00:40 np0005546606.novalocal cloud-init[1362]: 256 SHA256:cGmTyoK2/oEKlYkJLaIBASyiav31gw5bHCqoEMYlKiI root@np0005546606.novalocal (ED25519)
Dec 05 09:00:40 np0005546606.novalocal cloud-init[1366]: 3072 SHA256:RshVkPt7D5c6zD8u2qcQeRM6kvH9GCz/R/NcuacqMAU root@np0005546606.novalocal (RSA)
Dec 05 09:00:40 np0005546606.novalocal cloud-init[1368]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 05 09:00:40 np0005546606.novalocal cloud-init[1369]: #############################################################
Dec 05 09:00:40 np0005546606.novalocal cloud-init[1317]: Cloud-init v. 24.4-7.el9 finished at Fri, 05 Dec 2025 09:00:40 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.14 seconds
Dec 05 09:00:40 np0005546606.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 05 09:00:40 np0005546606.novalocal systemd[1]: Reached target Cloud-init target.
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: memstrack is not available
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 05 09:00:40 np0005546606.novalocal dracut[1267]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: memstrack is not available
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: *** Including module: systemd ***
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: *** Including module: fips ***
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: *** Including module: systemd-initrd ***
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: *** Including module: i18n ***
Dec 05 09:00:41 np0005546606.novalocal dracut[1267]: *** Including module: drm ***
Dec 05 09:00:42 np0005546606.novalocal dracut[1267]: *** Including module: prefixdevname ***
Dec 05 09:00:42 np0005546606.novalocal dracut[1267]: *** Including module: kernel-modules ***
Dec 05 09:00:42 np0005546606.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 05 09:00:42 np0005546606.novalocal chronyd[793]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Dec 05 09:00:42 np0005546606.novalocal chronyd[793]: System clock TAI offset set to 37 seconds
Dec 05 09:00:42 np0005546606.novalocal dracut[1267]: *** Including module: kernel-modules-extra ***
Dec 05 09:00:42 np0005546606.novalocal dracut[1267]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 05 09:00:42 np0005546606.novalocal dracut[1267]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 05 09:00:42 np0005546606.novalocal dracut[1267]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 05 09:00:42 np0005546606.novalocal dracut[1267]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 05 09:00:42 np0005546606.novalocal dracut[1267]: *** Including module: qemu ***
Dec 05 09:00:42 np0005546606.novalocal dracut[1267]: *** Including module: fstab-sys ***
Dec 05 09:00:42 np0005546606.novalocal dracut[1267]: *** Including module: rootfs-block ***
Dec 05 09:00:42 np0005546606.novalocal dracut[1267]: *** Including module: terminfo ***
Dec 05 09:00:42 np0005546606.novalocal dracut[1267]: *** Including module: udev-rules ***
Dec 05 09:00:43 np0005546606.novalocal dracut[1267]: Skipping udev rule: 91-permissions.rules
Dec 05 09:00:43 np0005546606.novalocal dracut[1267]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 05 09:00:43 np0005546606.novalocal dracut[1267]: *** Including module: virtiofs ***
Dec 05 09:00:43 np0005546606.novalocal dracut[1267]: *** Including module: dracut-systemd ***
Dec 05 09:00:43 np0005546606.novalocal dracut[1267]: *** Including module: usrmount ***
Dec 05 09:00:43 np0005546606.novalocal dracut[1267]: *** Including module: base ***
Dec 05 09:00:43 np0005546606.novalocal dracut[1267]: *** Including module: fs-lib ***
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]: *** Including module: kdumpbase ***
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:   microcode_ctl module: mangling fw_dir
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: configuration "intel" is ignored
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 05 09:00:44 np0005546606.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 05 09:00:45 np0005546606.novalocal dracut[1267]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 05 09:00:45 np0005546606.novalocal dracut[1267]: *** Including module: openssl ***
Dec 05 09:00:45 np0005546606.novalocal dracut[1267]: *** Including module: shutdown ***
Dec 05 09:00:45 np0005546606.novalocal dracut[1267]: *** Including module: squash ***
Dec 05 09:00:45 np0005546606.novalocal dracut[1267]: *** Including modules done ***
Dec 05 09:00:45 np0005546606.novalocal dracut[1267]: *** Installing kernel module dependencies ***
Dec 05 09:00:45 np0005546606.novalocal irqbalance[784]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 05 09:00:45 np0005546606.novalocal irqbalance[784]: IRQ 25 affinity is now unmanaged
Dec 05 09:00:45 np0005546606.novalocal irqbalance[784]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 05 09:00:45 np0005546606.novalocal irqbalance[784]: IRQ 31 affinity is now unmanaged
Dec 05 09:00:45 np0005546606.novalocal irqbalance[784]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 05 09:00:45 np0005546606.novalocal irqbalance[784]: IRQ 28 affinity is now unmanaged
Dec 05 09:00:45 np0005546606.novalocal irqbalance[784]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 05 09:00:45 np0005546606.novalocal irqbalance[784]: IRQ 32 affinity is now unmanaged
Dec 05 09:00:45 np0005546606.novalocal irqbalance[784]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 05 09:00:45 np0005546606.novalocal irqbalance[784]: IRQ 30 affinity is now unmanaged
Dec 05 09:00:45 np0005546606.novalocal irqbalance[784]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 05 09:00:45 np0005546606.novalocal irqbalance[784]: IRQ 29 affinity is now unmanaged
Dec 05 09:00:45 np0005546606.novalocal dracut[1267]: *** Installing kernel module dependencies done ***
Dec 05 09:00:45 np0005546606.novalocal dracut[1267]: *** Resolving executable dependencies ***
Dec 05 09:00:47 np0005546606.novalocal dracut[1267]: *** Resolving executable dependencies done ***
Dec 05 09:00:47 np0005546606.novalocal dracut[1267]: *** Generating early-microcode cpio image ***
Dec 05 09:00:47 np0005546606.novalocal dracut[1267]: *** Store current command line parameters ***
Dec 05 09:00:47 np0005546606.novalocal dracut[1267]: Stored kernel commandline:
Dec 05 09:00:47 np0005546606.novalocal dracut[1267]: No dracut internal kernel commandline stored in the initramfs
Dec 05 09:00:47 np0005546606.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 09:00:47 np0005546606.novalocal dracut[1267]: *** Install squash loader ***
Dec 05 09:00:48 np0005546606.novalocal dracut[1267]: *** Squashing the files inside the initramfs ***
Dec 05 09:00:49 np0005546606.novalocal dracut[1267]: *** Squashing the files inside the initramfs done ***
Dec 05 09:00:49 np0005546606.novalocal dracut[1267]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec 05 09:00:49 np0005546606.novalocal dracut[1267]: *** Hardlinking files ***
Dec 05 09:00:49 np0005546606.novalocal dracut[1267]: Mode:           real
Dec 05 09:00:49 np0005546606.novalocal dracut[1267]: Files:          50
Dec 05 09:00:49 np0005546606.novalocal dracut[1267]: Linked:         0 files
Dec 05 09:00:49 np0005546606.novalocal dracut[1267]: Compared:       0 xattrs
Dec 05 09:00:49 np0005546606.novalocal dracut[1267]: Compared:       0 files
Dec 05 09:00:49 np0005546606.novalocal dracut[1267]: Saved:          0 B
Dec 05 09:00:49 np0005546606.novalocal dracut[1267]: Duration:       0.000832 seconds
Dec 05 09:00:49 np0005546606.novalocal dracut[1267]: *** Hardlinking files done ***
Dec 05 09:00:50 np0005546606.novalocal dracut[1267]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec 05 09:00:50 np0005546606.novalocal kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Dec 05 09:00:50 np0005546606.novalocal kdumpctl[1016]: kdump: Starting kdump: [OK]
Dec 05 09:00:50 np0005546606.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 05 09:00:50 np0005546606.novalocal systemd[1]: Startup finished in 2.017s (kernel) + 2.440s (initrd) + 16.390s (userspace) = 20.848s.
Dec 05 09:01:01 np0005546606.novalocal CROND[4294]: (root) CMD (run-parts /etc/cron.hourly)
Dec 05 09:01:01 np0005546606.novalocal run-parts[4297]: (/etc/cron.hourly) starting 0anacron
Dec 05 09:01:01 np0005546606.novalocal anacron[4305]: Anacron started on 2025-12-05
Dec 05 09:01:01 np0005546606.novalocal anacron[4305]: Will run job `cron.daily' in 13 min.
Dec 05 09:01:01 np0005546606.novalocal anacron[4305]: Will run job `cron.weekly' in 33 min.
Dec 05 09:01:01 np0005546606.novalocal anacron[4305]: Will run job `cron.monthly' in 53 min.
Dec 05 09:01:01 np0005546606.novalocal anacron[4305]: Jobs will be executed sequentially
Dec 05 09:01:01 np0005546606.novalocal run-parts[4307]: (/etc/cron.hourly) finished 0anacron
Dec 05 09:01:01 np0005546606.novalocal CROND[4293]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 05 09:01:06 np0005546606.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 09:01:08 np0005546606.novalocal sshd-session[4310]: Accepted publickey for zuul from 38.102.83.114 port 35992 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 05 09:01:08 np0005546606.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 05 09:01:08 np0005546606.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 05 09:01:08 np0005546606.novalocal systemd-logind[789]: New session 1 of user zuul.
Dec 05 09:01:08 np0005546606.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 05 09:01:08 np0005546606.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Queued start job for default target Main User Target.
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Created slice User Application Slice.
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Reached target Paths.
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Reached target Timers.
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Starting D-Bus User Message Bus Socket...
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Starting Create User's Volatile Files and Directories...
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Finished Create User's Volatile Files and Directories.
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Listening on D-Bus User Message Bus Socket.
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Reached target Sockets.
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Reached target Basic System.
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Reached target Main User Target.
Dec 05 09:01:08 np0005546606.novalocal systemd[4314]: Startup finished in 111ms.
Dec 05 09:01:08 np0005546606.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 05 09:01:08 np0005546606.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 05 09:01:08 np0005546606.novalocal sshd-session[4310]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:01:09 np0005546606.novalocal python3[4397]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:01:11 np0005546606.novalocal python3[4425]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:01:19 np0005546606.novalocal python3[4483]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:01:20 np0005546606.novalocal python3[4523]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 05 09:01:22 np0005546606.novalocal python3[4549]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCktN10ZsyEhTNs93PdenY/LV3y/3VxDrSxmLf0iui2gamR+Oudb1I07MwWpVcDCKtAtB3z2kjNceS6vaTyw7LbkYJWshaxsWAR/TY+/bY3vudAJeJ03sw36rm06jYpDXXeovo+mPfw+OJHim9la9+9ZgO9SGHINXQMxgoqb87+Qru60RNid9flV7l7SKiStGYfdNfNgsfyPlRd36EH8dHifILNd8aqUIftMMCUNIZnktX0GuG2bAqzOkd/zXv49JU/qb7Sil0bXR2T/KihQwrGztnvSEYgDURo4K8p68YQLdhoz0t/vFfs5VbkwsTPhdQMdhdTepeGaMpOxBoiUfLMQRTJFRfQJIBrGSOpZ7iLxpzwqWkEojYe4qcgWVF+3PWGXSRzQaaa0TWcwtMJw2N+DU72QBLKsqNd4CWqykOMDrUMF7B8pYP27S2ADrpwZ+IcrPlnNiReEW7xH7m0BtmRxaQjClNQsxxkCfPwtZF5Bt7yU0ClEFxBX4wi1iC1+Ic= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:23 np0005546606.novalocal python3[4573]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:23 np0005546606.novalocal python3[4672]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:01:24 np0005546606.novalocal python3[4743]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764925283.353944-251-24737053306500/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=2b565096098346239f525f4f635d8b58_id_rsa follow=False checksum=54d731b5c42c1a9787c9f0f576ad046333fdca49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:24 np0005546606.novalocal python3[4866]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:01:24 np0005546606.novalocal python3[4937]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764925284.3431997-306-121525345334898/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=2b565096098346239f525f4f635d8b58_id_rsa.pub follow=False checksum=ba3d59c82c5d5f82f37a6affae35cd69f9c86f26 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:26 np0005546606.novalocal python3[4985]: ansible-ping Invoked with data=pong
Dec 05 09:01:27 np0005546606.novalocal python3[5009]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:01:29 np0005546606.novalocal python3[5067]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 05 09:01:30 np0005546606.novalocal python3[5099]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:31 np0005546606.novalocal python3[5123]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:31 np0005546606.novalocal python3[5147]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:31 np0005546606.novalocal python3[5171]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:31 np0005546606.novalocal python3[5195]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:32 np0005546606.novalocal python3[5219]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:33 np0005546606.novalocal sudo[5243]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wivgclhmrvihmvggrpmtsqelvcckmgnp ; /usr/bin/python3'
Dec 05 09:01:33 np0005546606.novalocal sudo[5243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:01:33 np0005546606.novalocal python3[5245]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:33 np0005546606.novalocal sudo[5243]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:34 np0005546606.novalocal sudo[5321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blvnkswjeixvvolbibtfbkxxvmtlwijm ; /usr/bin/python3'
Dec 05 09:01:34 np0005546606.novalocal sudo[5321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:01:34 np0005546606.novalocal python3[5323]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:01:34 np0005546606.novalocal sudo[5321]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:34 np0005546606.novalocal sudo[5394]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciledzcuettcoedzvunsqemvbhssejsu ; /usr/bin/python3'
Dec 05 09:01:34 np0005546606.novalocal sudo[5394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:01:35 np0005546606.novalocal python3[5396]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764925293.9463756-31-137445077235049/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:35 np0005546606.novalocal sudo[5394]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:35 np0005546606.novalocal python3[5444]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:35 np0005546606.novalocal python3[5468]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:36 np0005546606.novalocal python3[5492]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:36 np0005546606.novalocal python3[5516]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:36 np0005546606.novalocal python3[5540]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:36 np0005546606.novalocal python3[5564]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:37 np0005546606.novalocal python3[5588]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:39 np0005546606.novalocal python3[5612]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:39 np0005546606.novalocal python3[5636]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:39 np0005546606.novalocal python3[5660]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:40 np0005546606.novalocal python3[5684]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:40 np0005546606.novalocal python3[5708]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:40 np0005546606.novalocal python3[5732]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:40 np0005546606.novalocal python3[5756]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:41 np0005546606.novalocal python3[5780]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:41 np0005546606.novalocal python3[5804]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:41 np0005546606.novalocal python3[5828]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:41 np0005546606.novalocal python3[5852]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:42 np0005546606.novalocal python3[5876]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:42 np0005546606.novalocal python3[5900]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:42 np0005546606.novalocal python3[5924]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:43 np0005546606.novalocal python3[5948]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:43 np0005546606.novalocal python3[5972]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:43 np0005546606.novalocal python3[5996]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:43 np0005546606.novalocal python3[6020]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:44 np0005546606.novalocal python3[6044]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:01:45 np0005546606.novalocal sudo[6068]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muzuddtnobcbyrferyjtmcdizpowvtmz ; /usr/bin/python3'
Dec 05 09:01:45 np0005546606.novalocal sudo[6068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:01:45 np0005546606.novalocal python3[6070]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 05 09:01:45 np0005546606.novalocal systemd[1]: Starting Time & Date Service...
Dec 05 09:01:45 np0005546606.novalocal systemd[1]: Started Time & Date Service.
Dec 05 09:01:45 np0005546606.novalocal systemd-timedated[6072]: Changed time zone to 'UTC' (UTC).
Dec 05 09:01:45 np0005546606.novalocal sudo[6068]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:45 np0005546606.novalocal sudo[6099]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvzwlxqmvabixvxhumxcjtdquboywtww ; /usr/bin/python3'
Dec 05 09:01:45 np0005546606.novalocal sudo[6099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:01:46 np0005546606.novalocal python3[6101]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:46 np0005546606.novalocal sudo[6099]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:46 np0005546606.novalocal python3[6177]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:01:46 np0005546606.novalocal python3[6248]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764925306.249787-251-73381127936340/source _original_basename=tmpvc5ku2sy follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:47 np0005546606.novalocal python3[6348]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:01:47 np0005546606.novalocal python3[6419]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764925307.1084895-301-70617973210184/source _original_basename=tmp8utziv8n follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:48 np0005546606.novalocal sudo[6519]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwcqjiqbiljoloahhbrwjssymdvkbwdu ; /usr/bin/python3'
Dec 05 09:01:48 np0005546606.novalocal sudo[6519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:01:48 np0005546606.novalocal python3[6521]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:01:48 np0005546606.novalocal sudo[6519]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:48 np0005546606.novalocal sudo[6592]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovmyjkwqhqynwrbsquvkeeaixpjaoclq ; /usr/bin/python3'
Dec 05 09:01:48 np0005546606.novalocal sudo[6592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:01:48 np0005546606.novalocal python3[6594]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764925308.2346113-381-3199321595946/source _original_basename=tmpxm46_smq follow=False checksum=f07c805834277da0cbee63ff582683dc2ed910d5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:48 np0005546606.novalocal sudo[6592]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:49 np0005546606.novalocal python3[6642]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:01:49 np0005546606.novalocal python3[6668]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:01:49 np0005546606.novalocal sudo[6746]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdawljbjbbkjfplyzbgzojoaqabemgbb ; /usr/bin/python3'
Dec 05 09:01:49 np0005546606.novalocal sudo[6746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:01:50 np0005546606.novalocal python3[6748]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:01:50 np0005546606.novalocal sudo[6746]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:50 np0005546606.novalocal sudo[6819]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anzbxcvuvliaafxagxrpuzqubjlzhkob ; /usr/bin/python3'
Dec 05 09:01:50 np0005546606.novalocal sudo[6819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:01:50 np0005546606.novalocal python3[6821]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764925309.8629324-451-150189187366844/source _original_basename=tmpnf1aa5st follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:01:50 np0005546606.novalocal sudo[6819]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:50 np0005546606.novalocal sudo[6870]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiuxmgyylmrqfgnqvyufaivqhgswhtnz ; /usr/bin/python3'
Dec 05 09:01:50 np0005546606.novalocal sudo[6870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:01:51 np0005546606.novalocal python3[6872]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-933c-c72c-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:01:51 np0005546606.novalocal sudo[6870]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:51 np0005546606.novalocal python3[6900]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-933c-c72c-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 05 09:01:53 np0005546606.novalocal python3[6929]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:02:14 np0005546606.novalocal sudo[6953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbybdzobwdnlnrflwioexibvuhdtzfny ; /usr/bin/python3'
Dec 05 09:02:14 np0005546606.novalocal sudo[6953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:02:14 np0005546606.novalocal python3[6955]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:02:14 np0005546606.novalocal sudo[6953]: pam_unix(sudo:session): session closed for user root
Dec 05 09:02:15 np0005546606.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 05 09:02:55 np0005546606.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 05 09:02:55 np0005546606.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 05 09:02:55 np0005546606.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 05 09:02:55 np0005546606.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 05 09:02:55 np0005546606.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 05 09:02:55 np0005546606.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 05 09:02:55 np0005546606.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 05 09:02:55 np0005546606.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 05 09:02:55 np0005546606.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 05 09:02:55 np0005546606.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 05 09:02:55 np0005546606.novalocal NetworkManager[858]: <info>  [1764925375.2546] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 05 09:02:55 np0005546606.novalocal systemd-udevd[6958]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 09:02:55 np0005546606.novalocal NetworkManager[858]: <info>  [1764925375.2755] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:02:55 np0005546606.novalocal NetworkManager[858]: <info>  [1764925375.2801] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 05 09:02:55 np0005546606.novalocal NetworkManager[858]: <info>  [1764925375.2807] device (eth1): carrier: link connected
Dec 05 09:02:55 np0005546606.novalocal NetworkManager[858]: <info>  [1764925375.2810] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 05 09:02:55 np0005546606.novalocal NetworkManager[858]: <info>  [1764925375.2820] policy: auto-activating connection 'Wired connection 1' (e7fb5895-bcdf-3b3c-8ddf-f78dbcafe155)
Dec 05 09:02:55 np0005546606.novalocal NetworkManager[858]: <info>  [1764925375.2826] device (eth1): Activation: starting connection 'Wired connection 1' (e7fb5895-bcdf-3b3c-8ddf-f78dbcafe155)
Dec 05 09:02:55 np0005546606.novalocal NetworkManager[858]: <info>  [1764925375.2827] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:02:55 np0005546606.novalocal NetworkManager[858]: <info>  [1764925375.2831] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:02:55 np0005546606.novalocal NetworkManager[858]: <info>  [1764925375.2837] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:02:55 np0005546606.novalocal NetworkManager[858]: <info>  [1764925375.2845] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 05 09:02:56 np0005546606.novalocal python3[6985]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-9bb8-c7fb-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:03:06 np0005546606.novalocal sudo[7063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dudgseztgrvoeayetbhoensraguibtws ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 05 09:03:06 np0005546606.novalocal sudo[7063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:03:06 np0005546606.novalocal python3[7065]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:03:06 np0005546606.novalocal sudo[7063]: pam_unix(sudo:session): session closed for user root
Dec 05 09:03:06 np0005546606.novalocal sudo[7136]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cemwyovzxxhqtghanifgjdwemgwufdbp ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 05 09:03:06 np0005546606.novalocal sudo[7136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:03:06 np0005546606.novalocal python3[7138]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764925386.1502752-104-105904494433874/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=31db335a6bc211e4495fa0878829ca9993d2f9dc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:03:06 np0005546606.novalocal sudo[7136]: pam_unix(sudo:session): session closed for user root
Dec 05 09:03:07 np0005546606.novalocal sudo[7186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flixuaegrpgtanvcuagfldiwqlcceohq ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 05 09:03:07 np0005546606.novalocal sudo[7186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:03:07 np0005546606.novalocal python3[7188]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: Stopping Network Manager...
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[858]: <info>  [1764925387.6963] caught SIGTERM, shutting down normally.
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[858]: <info>  [1764925387.6971] dhcp4 (eth0): canceled DHCP transaction
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[858]: <info>  [1764925387.6971] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[858]: <info>  [1764925387.6972] dhcp4 (eth0): state changed no lease
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[858]: <info>  [1764925387.6974] manager: NetworkManager state is now CONNECTING
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[858]: <info>  [1764925387.7062] dhcp4 (eth1): canceled DHCP transaction
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[858]: <info>  [1764925387.7063] dhcp4 (eth1): state changed no lease
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[858]: <info>  [1764925387.7130] exiting (success)
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: Stopped Network Manager.
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: NetworkManager.service: Consumed 1.101s CPU time, 10.1M memory peak.
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: Starting Network Manager...
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.7717] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:77fa800c-2983-4f5e-b315-57495a3fe27a)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.7720] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.7768] manager[0x557242b3a070]: monitoring kernel firmware directory '/lib/firmware'.
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: Starting Hostname Service...
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: Started Hostname Service.
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8614] hostname: hostname: using hostnamed
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8615] hostname: static hostname changed from (none) to "np0005546606.novalocal"
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8623] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8631] manager[0x557242b3a070]: rfkill: Wi-Fi hardware radio set enabled
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8632] manager[0x557242b3a070]: rfkill: WWAN hardware radio set enabled
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8678] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8678] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8680] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8681] manager: Networking is enabled by state file
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8685] settings: Loaded settings plugin: keyfile (internal)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8691] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8737] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8752] dhcp: init: Using DHCP client 'internal'
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8757] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8767] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8777] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8791] device (lo): Activation: starting connection 'lo' (f6d82822-cadb-414c-ae68-8f6717460373)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8803] device (eth0): carrier: link connected
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8810] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8819] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8820] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8833] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8845] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8855] device (eth1): carrier: link connected
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8861] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8871] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (e7fb5895-bcdf-3b3c-8ddf-f78dbcafe155) (indicated)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8871] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8881] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8892] device (eth1): Activation: starting connection 'Wired connection 1' (e7fb5895-bcdf-3b3c-8ddf-f78dbcafe155)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8903] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: Started Network Manager.
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8911] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8916] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8919] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8924] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8929] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8934] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8939] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8945] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8956] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8961] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8991] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.8996] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.9020] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.9025] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 05 09:03:07 np0005546606.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 05 09:03:07 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925387.9030] device (lo): Activation: successful, device activated.
Dec 05 09:03:07 np0005546606.novalocal sudo[7186]: pam_unix(sudo:session): session closed for user root
Dec 05 09:03:08 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925388.0435] dhcp4 (eth0): state changed new lease, address=38.129.56.228
Dec 05 09:03:08 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925388.0445] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 05 09:03:08 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925388.0527] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 05 09:03:08 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925388.0564] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 05 09:03:08 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925388.0566] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 05 09:03:08 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925388.0571] manager: NetworkManager state is now CONNECTED_SITE
Dec 05 09:03:08 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925388.0576] device (eth0): Activation: successful, device activated.
Dec 05 09:03:08 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925388.0582] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 05 09:03:08 np0005546606.novalocal python3[7266]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-9bb8-c7fb-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
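The ad-hoc `ip route` above is the playbook confirming that the DHCP lease on eth0 (38.129.56.228) installed the default route the policy line reports. Roughly the same check by hand, as a sketch:

    ip route show default
    ip -4 addr show dev eth0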
Dec 05 09:03:18 np0005546606.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 09:03:37 np0005546606.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9175] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 05 09:03:52 np0005546606.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 09:03:52 np0005546606.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9541] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9548] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9568] device (eth1): Activation: successful, device activated.
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9583] manager: startup complete
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9586] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <warn>  [1764925432.9608] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9630] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 05 09:03:52 np0005546606.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9763] dhcp4 (eth1): canceled DHCP transaction
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9763] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9763] dhcp4 (eth1): state changed no lease
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9795] policy: auto-activating connection 'ci-private-network' (d1d8c6ca-28b5-552c-8427-579a453c92d6)
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9804] device (eth1): Activation: starting connection 'ci-private-network' (d1d8c6ca-28b5-552c-8427-579a453c92d6)
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9807] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9818] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9835] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9856] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9917] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9925] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:03:52 np0005546606.novalocal NetworkManager[7200]: <info>  [1764925432.9950] device (eth1): Activation: successful, device activated.
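To recap the eth1 sequence: the assumed 'Wired connection 1' profile failed with reason 'ip-config-unavailable' once its DHCP transaction came back with no lease, NetworkManager tore it down, and the autoconnect-eligible 'ci-private-network' profile took over the device. A quick way to confirm which profile each device ended up with, sketched with stock nmcli queries:

    nmcli device status
    nmcli connection show ci-private-network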
Dec 05 09:04:03 np0005546606.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 09:04:05 np0005546606.novalocal systemd[4314]: Starting Mark boot as successful...
Dec 05 09:04:05 np0005546606.novalocal systemd[4314]: Finished Mark boot as successful.
Dec 05 09:04:08 np0005546606.novalocal sshd-session[4324]: Received disconnect from 38.102.83.114 port 35992:11: disconnected by user
Dec 05 09:04:08 np0005546606.novalocal sshd-session[4324]: Disconnected from user zuul 38.102.83.114 port 35992
Dec 05 09:04:08 np0005546606.novalocal sshd-session[4310]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:04:08 np0005546606.novalocal systemd-logind[789]: Session 1 logged out. Waiting for processes to exit.
Dec 05 09:05:31 np0005546606.novalocal sshd-session[7301]: Accepted publickey for zuul from 38.102.83.114 port 40842 ssh2: RSA SHA256:KFmgdvKpB8DAdlN2nfDmmuFckJgJGHDMrTR5Gyr7RXM
Dec 05 09:05:31 np0005546606.novalocal systemd-logind[789]: New session 3 of user zuul.
Dec 05 09:05:31 np0005546606.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 05 09:05:31 np0005546606.novalocal sshd-session[7301]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:05:31 np0005546606.novalocal sudo[7380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvzfgdvlmrgelbktkgdyvunldsafggff ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 05 09:05:31 np0005546606.novalocal sudo[7380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:05:31 np0005546606.novalocal python3[7382]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:05:31 np0005546606.novalocal sudo[7380]: pam_unix(sudo:session): session closed for user root
Dec 05 09:05:31 np0005546606.novalocal sudo[7453]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqxggymhuwbarkeapibgbumrmuiyfaa ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 05 09:05:31 np0005546606.novalocal sudo[7453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:05:32 np0005546606.novalocal python3[7455]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764925531.3568544-373-140746398129074/source _original_basename=tmpizav4g65 follow=False checksum=95f92515c0b530b06af3a9429013e12737568c06 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:05:32 np0005546606.novalocal sudo[7453]: pam_unix(sudo:session): session closed for user root
Dec 05 09:05:36 np0005546606.novalocal sshd-session[7304]: Connection closed by 38.102.83.114 port 40842
Dec 05 09:05:36 np0005546606.novalocal sshd-session[7301]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:05:36 np0005546606.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 05 09:05:36 np0005546606.novalocal systemd-logind[789]: Session 3 logged out. Waiting for processes to exit.
Dec 05 09:05:36 np0005546606.novalocal systemd-logind[789]: Removed session 3.
Dec 05 09:07:05 np0005546606.novalocal systemd[4314]: Created slice User Background Tasks Slice.
Dec 05 09:07:05 np0005546606.novalocal systemd[4314]: Starting Cleanup of User's Temporary Files and Directories...
Dec 05 09:07:05 np0005546606.novalocal systemd[4314]: Finished Cleanup of User's Temporary Files and Directories.
Dec 05 09:12:04 np0005546606.novalocal sshd-session[7486]: Accepted publickey for zuul from 38.102.83.114 port 40938 ssh2: RSA SHA256:KFmgdvKpB8DAdlN2nfDmmuFckJgJGHDMrTR5Gyr7RXM
Dec 05 09:12:04 np0005546606.novalocal systemd-logind[789]: New session 4 of user zuul.
Dec 05 09:12:04 np0005546606.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 05 09:12:04 np0005546606.novalocal sshd-session[7486]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:12:04 np0005546606.novalocal sudo[7513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgjrxxdaonqkrandviwmzidxarbtpawc ; /usr/bin/python3'
Dec 05 09:12:04 np0005546606.novalocal sudo[7513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:04 np0005546606.novalocal python3[7515]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-1731-dc68-000000001cde-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:12:04 np0005546606.novalocal sudo[7513]: pam_unix(sudo:session): session closed for user root
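The MAJ:MIN value lsblk reports for /dev/vda (252:0 on this virtio-blk guest, as the later io.max writes show) is the device key the cgroup v2 io controller expects. The same query stands alone as:

    # major:minor of the root disk; 252 is the usual virtio-blk major
    lsblk -nd -o MAJ:MIN /dev/vda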
Dec 05 09:12:05 np0005546606.novalocal sudo[7542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqtnnmyxuakkhmboauqdugbqbqsebyrn ; /usr/bin/python3'
Dec 05 09:12:05 np0005546606.novalocal sudo[7542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:05 np0005546606.novalocal python3[7544]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:12:05 np0005546606.novalocal sudo[7542]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:05 np0005546606.novalocal sudo[7568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnsyfzbulxskpzqfuandippduyyijlxn ; /usr/bin/python3'
Dec 05 09:12:05 np0005546606.novalocal sudo[7568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:05 np0005546606.novalocal python3[7570]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:12:05 np0005546606.novalocal sudo[7568]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:05 np0005546606.novalocal sudo[7594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iswpjewixacdnxybklclepargrpxzebo ; /usr/bin/python3'
Dec 05 09:12:05 np0005546606.novalocal sudo[7594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:05 np0005546606.novalocal python3[7596]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:12:05 np0005546606.novalocal sudo[7594]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:05 np0005546606.novalocal sudo[7620]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcfzjtwllydauyhexvvmzourgaujkmzi ; /usr/bin/python3'
Dec 05 09:12:05 np0005546606.novalocal sudo[7620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:05 np0005546606.novalocal python3[7622]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:12:06 np0005546606.novalocal sudo[7620]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:06 np0005546606.novalocal sudo[7646]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjcxvivxewmraskwpnxvqtuqduztycmu ; /usr/bin/python3'
Dec 05 09:12:06 np0005546606.novalocal sudo[7646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:06 np0005546606.novalocal python3[7648]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:12:06 np0005546606.novalocal sudo[7646]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:07 np0005546606.novalocal sudo[7724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuvvvxxzyrejasvgxcohbvuxdypnjiaq ; /usr/bin/python3'
Dec 05 09:12:07 np0005546606.novalocal sudo[7724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:07 np0005546606.novalocal python3[7726]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:12:07 np0005546606.novalocal sudo[7724]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:07 np0005546606.novalocal sudo[7797]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnavegylneqzcfhnygrxwmtiijelvsxt ; /usr/bin/python3'
Dec 05 09:12:07 np0005546606.novalocal sudo[7797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:07 np0005546606.novalocal python3[7799]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764925926.9953144-519-230044561645821/source _original_basename=tmpmxsfyzg5 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:12:07 np0005546606.novalocal sudo[7797]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:08 np0005546606.novalocal sudo[7847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftkrypmcnbiaitfyucvzntlghpvkhtgb ; /usr/bin/python3'
Dec 05 09:12:08 np0005546606.novalocal sudo[7847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:08 np0005546606.novalocal python3[7849]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:12:08 np0005546606.novalocal systemd[1]: Reloading.
Dec 05 09:12:08 np0005546606.novalocal systemd-rc-local-generator[7867]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:12:09 np0005546606.novalocal sudo[7847]: pam_unix(sudo:session): session closed for user root
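The override.conf dropped into /etc/systemd/system.conf.d/ is a drop-in for the system manager, and its contents are not recorded in this log; the copy is simply followed by the daemon-reload above. A sketch of the same pattern with a purely hypothetical setting:

    # the [Manager] key below is an assumption; the log does not show the file's contents
    install -d -m 0755 /etc/systemd/system.conf.d
    printf '[Manager]\nDefaultTasksMax=80%%\n' > /etc/systemd/system.conf.d/override.conf
    systemctl daemon-reload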
Dec 05 09:12:10 np0005546606.novalocal sudo[7902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rowegfwnogzmojliukkezbiezrsjvtby ; /usr/bin/python3'
Dec 05 09:12:10 np0005546606.novalocal sudo[7902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:10 np0005546606.novalocal python3[7904]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 05 09:12:10 np0005546606.novalocal sudo[7902]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:10 np0005546606.novalocal sudo[7928]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twnbfbapzdkrlrdbwqpenvoniyhcyfxk ; /usr/bin/python3'
Dec 05 09:12:10 np0005546606.novalocal sudo[7928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:10 np0005546606.novalocal python3[7930]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:12:10 np0005546606.novalocal sudo[7928]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:10 np0005546606.novalocal sudo[7956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhouavmtzbelgrskfdeazuophbfocaek ; /usr/bin/python3'
Dec 05 09:12:10 np0005546606.novalocal sudo[7956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:11 np0005546606.novalocal python3[7958]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:12:11 np0005546606.novalocal sudo[7956]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:11 np0005546606.novalocal sudo[7984]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kngcfgadinwosgvikfkrqiuvnaxdnjcg ; /usr/bin/python3'
Dec 05 09:12:11 np0005546606.novalocal sudo[7984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:11 np0005546606.novalocal python3[7986]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:12:11 np0005546606.novalocal sudo[7984]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:11 np0005546606.novalocal sudo[8012]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilxvosgypmdwogniytmmuttrcbzggnpy ; /usr/bin/python3'
Dec 05 09:12:11 np0005546606.novalocal sudo[8012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:11 np0005546606.novalocal python3[8014]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:12:11 np0005546606.novalocal sudo[8012]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:12 np0005546606.novalocal python3[8041]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-1731-dc68-000000001ce5-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
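The loop above writes the same throttle line into io.max for each top-level slice, capping the virtio root disk (252:0) at 18000 read/write IOPS and 262144000 bytes/s (250 MiB/s) in each direction; the final shell command just reads the four files back. Condensed for a single slice, assuming cgroup v2 mounted at /sys/fs/cgroup:

    echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
    cat /sys/fs/cgroup/system.slice/io.max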
Dec 05 09:12:12 np0005546606.novalocal python3[8071]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 09:12:15 np0005546606.novalocal sshd-session[7489]: Connection closed by 38.102.83.114 port 40938
Dec 05 09:12:15 np0005546606.novalocal sshd-session[7486]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:12:15 np0005546606.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 05 09:12:15 np0005546606.novalocal systemd[1]: session-4.scope: Consumed 4.124s CPU time.
Dec 05 09:12:15 np0005546606.novalocal systemd-logind[789]: Session 4 logged out. Waiting for processes to exit.
Dec 05 09:12:15 np0005546606.novalocal systemd-logind[789]: Removed session 4.
Dec 05 09:12:17 np0005546606.novalocal sshd-session[8077]: Accepted publickey for zuul from 38.102.83.114 port 45870 ssh2: RSA SHA256:KFmgdvKpB8DAdlN2nfDmmuFckJgJGHDMrTR5Gyr7RXM
Dec 05 09:12:17 np0005546606.novalocal systemd-logind[789]: New session 5 of user zuul.
Dec 05 09:12:17 np0005546606.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 05 09:12:17 np0005546606.novalocal sshd-session[8077]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:12:17 np0005546606.novalocal sudo[8104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqhgqeuukgaljxfjaqpkmsljtpeoqdxs ; /usr/bin/python3'
Dec 05 09:12:17 np0005546606.novalocal sudo[8104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:17 np0005546606.novalocal python3[8106]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
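The dnf module call above installs the container tooling; the plain-package equivalent is simply:

    dnf install -y podman buildah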
Dec 05 09:12:36 np0005546606.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 05 09:12:36 np0005546606.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:12:36 np0005546606.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 05 09:12:36 np0005546606.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:12:36 np0005546606.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:12:36 np0005546606.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:12:36 np0005546606.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:12:36 np0005546606.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:12:47 np0005546606.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 05 09:12:47 np0005546606.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:12:47 np0005546606.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 05 09:12:47 np0005546606.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:12:47 np0005546606.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:12:47 np0005546606.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:12:47 np0005546606.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:12:47 np0005546606.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:12:57 np0005546606.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 05 09:12:57 np0005546606.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:12:57 np0005546606.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 05 09:12:57 np0005546606.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:12:57 np0005546606.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:12:57 np0005546606.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:12:57 np0005546606.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:12:57 np0005546606.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:12:59 np0005546606.novalocal setsebool[8172]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 05 09:12:59 np0005546606.novalocal setsebool[8172]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
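Both booleans are flipped by root during the podman/buildah install transaction (most likely container-selinux scriptlets, though the log does not say). Done by hand, the persistent equivalent would be:

    setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1
    getsebool virt_use_nfs virt_sandbox_use_all_caps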
Dec 05 09:13:11 np0005546606.novalocal kernel: SELinux:  Converting 389 SID table entries...
Dec 05 09:13:11 np0005546606.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:13:11 np0005546606.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 05 09:13:11 np0005546606.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:13:11 np0005546606.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:13:11 np0005546606.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:13:11 np0005546606.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:13:11 np0005546606.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:13:31 np0005546606.novalocal dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 05 09:13:31 np0005546606.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 09:13:31 np0005546606.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 05 09:13:31 np0005546606.novalocal systemd[1]: Reloading.
Dec 05 09:13:31 np0005546606.novalocal systemd-rc-local-generator[8922]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:13:31 np0005546606.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 09:13:32 np0005546606.novalocal sudo[8104]: pam_unix(sudo:session): session closed for user root
Dec 05 09:13:37 np0005546606.novalocal python3[13937]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-b39a-b4ba-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:13:39 np0005546606.novalocal kernel: evm: overlay not supported
Dec 05 09:13:39 np0005546606.novalocal systemd[4314]: Starting D-Bus User Message Bus...
Dec 05 09:13:39 np0005546606.novalocal dbus-broker-launch[14635]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 05 09:13:39 np0005546606.novalocal dbus-broker-launch[14635]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 05 09:13:39 np0005546606.novalocal systemd[4314]: Started D-Bus User Message Bus.
Dec 05 09:13:39 np0005546606.novalocal dbus-broker-lau[14635]: Ready
Dec 05 09:13:39 np0005546606.novalocal systemd[4314]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 05 09:13:39 np0005546606.novalocal systemd[4314]: Created slice Slice /user.
Dec 05 09:13:39 np0005546606.novalocal systemd[4314]: podman-14476.scope: unit configures an IP firewall, but not running as root.
Dec 05 09:13:39 np0005546606.novalocal systemd[4314]: (This warning is only shown for the first unit using IP firewalling.)
Dec 05 09:13:39 np0005546606.novalocal systemd[4314]: Started podman-14476.scope.
Dec 05 09:13:39 np0005546606.novalocal systemd[4314]: Started podman-pause-afee47f2.scope.
Dec 05 09:13:40 np0005546606.novalocal sshd-session[8080]: Connection closed by 38.102.83.114 port 45870
Dec 05 09:13:40 np0005546606.novalocal sshd-session[8077]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:13:40 np0005546606.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 05 09:13:40 np0005546606.novalocal systemd[1]: session-5.scope: Consumed 1min 7.121s CPU time.
Dec 05 09:13:40 np0005546606.novalocal systemd-logind[789]: Session 5 logged out. Waiting for processes to exit.
Dec 05 09:13:40 np0005546606.novalocal systemd-logind[789]: Removed session 5.
Dec 05 09:13:55 np0005546606.novalocal sshd-session[22136]: Unable to negotiate with 38.129.56.31 port 39826: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 05 09:13:55 np0005546606.novalocal sshd-session[22134]: Unable to negotiate with 38.129.56.31 port 39834: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 05 09:13:55 np0005546606.novalocal sshd-session[22139]: Connection closed by 38.129.56.31 port 39812 [preauth]
Dec 05 09:13:55 np0005546606.novalocal sshd-session[22137]: Unable to negotiate with 38.129.56.31 port 39832: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 05 09:13:55 np0005546606.novalocal sshd-session[22141]: Connection closed by 38.129.56.31 port 39800 [preauth]
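The burst of preauth connections from 38.129.56.31 looks like an ssh-keyscan pass: each connection offers a single host key algorithm, and the ed25519 and sk-* probes fail because this sshd has no host keys of those types, while the remaining connections simply close. Harmless for the CI run, but if ed25519 host keys were wanted, the stock helper would generate any missing types, as a sketch:

    ssh-keygen -A
    systemctl restart sshd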
Dec 05 09:14:01 np0005546606.novalocal anacron[4305]: Job `cron.daily' started
Dec 05 09:14:01 np0005546606.novalocal anacron[4305]: Job `cron.daily' terminated
Dec 05 09:14:01 np0005546606.novalocal sshd-session[24637]: Accepted publickey for zuul from 38.102.83.114 port 49574 ssh2: RSA SHA256:KFmgdvKpB8DAdlN2nfDmmuFckJgJGHDMrTR5Gyr7RXM
Dec 05 09:14:01 np0005546606.novalocal systemd-logind[789]: New session 6 of user zuul.
Dec 05 09:14:01 np0005546606.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 05 09:14:01 np0005546606.novalocal sshd-session[24637]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:14:01 np0005546606.novalocal python3[24753]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNfQih8jgUGmFPzQd1+vzz/gDzG7ggJtFCAkilkjjKMrcYZVeOYW0ztyMJFAiGcrStAGMSMhwyAvgZEt3AudePU= zuul@np0005546605.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:14:02 np0005546606.novalocal sudo[24973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlmiiljxcghraeenivbzxhrnzhmzatel ; /usr/bin/python3'
Dec 05 09:14:02 np0005546606.novalocal sudo[24973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:14:02 np0005546606.novalocal python3[24986]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNfQih8jgUGmFPzQd1+vzz/gDzG7ggJtFCAkilkjjKMrcYZVeOYW0ztyMJFAiGcrStAGMSMhwyAvgZEt3AudePU= zuul@np0005546605.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:14:02 np0005546606.novalocal sudo[24973]: pam_unix(sudo:session): session closed for user root
Dec 05 09:14:03 np0005546606.novalocal sudo[25477]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orllqlzeqdsokleysafftizunuaemnxb ; /usr/bin/python3'
Dec 05 09:14:03 np0005546606.novalocal sudo[25477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:14:03 np0005546606.novalocal python3[25490]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546606.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 05 09:14:03 np0005546606.novalocal useradd[25591]: new group: name=cloud-admin, GID=1002
Dec 05 09:14:03 np0005546606.novalocal useradd[25591]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 05 09:14:03 np0005546606.novalocal sudo[25477]: pam_unix(sudo:session): session closed for user root
Dec 05 09:14:03 np0005546606.novalocal sudo[25784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcdurguenqffblliniwmfgzmhxmzqrio ; /usr/bin/python3'
Dec 05 09:14:03 np0005546606.novalocal sudo[25784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:14:03 np0005546606.novalocal python3[25794]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNfQih8jgUGmFPzQd1+vzz/gDzG7ggJtFCAkilkjjKMrcYZVeOYW0ztyMJFAiGcrStAGMSMhwyAvgZEt3AudePU= zuul@np0005546605.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 09:14:03 np0005546606.novalocal sudo[25784]: pam_unix(sudo:session): session closed for user root
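ansible.posix.authorized_key is run three times with the same controller ECDSA key, for zuul, root and the freshly created cloud-admin user. The manual equivalent for cloud-admin, sketched with the key string elided:

    install -d -m 0700 -o cloud-admin -g cloud-admin /home/cloud-admin/.ssh
    echo 'ecdsa-sha2-nistp256 AAAA... zuul@np0005546605.novalocal' >> /home/cloud-admin/.ssh/authorized_keys
    chown cloud-admin:cloud-admin /home/cloud-admin/.ssh/authorized_keys
    chmod 0600 /home/cloud-admin/.ssh/authorized_keys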
Dec 05 09:14:04 np0005546606.novalocal sudo[26110]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imapwhuvlvulfrhlvrjkhruyeqgtzbqc ; /usr/bin/python3'
Dec 05 09:14:04 np0005546606.novalocal sudo[26110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:14:04 np0005546606.novalocal python3[26119]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:14:04 np0005546606.novalocal sudo[26110]: pam_unix(sudo:session): session closed for user root
Dec 05 09:14:04 np0005546606.novalocal sudo[26391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnaruikheclvgwoyotmsogrxsmcldfyf ; /usr/bin/python3'
Dec 05 09:14:04 np0005546606.novalocal sudo[26391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:14:04 np0005546606.novalocal python3[26401]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926044.1434586-150-125236829742247/source _original_basename=tmpo_t4dbzf follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:14:04 np0005546606.novalocal sudo[26391]: pam_unix(sudo:session): session closed for user root
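The /etc/sudoers.d/cloud-admin drop-in is copied with mode 0640 but its contents are not logged. A typical shape for such a file, with the NOPASSWD rule stated here as an assumption, plus the syntax check worth running on any sudoers fragment:

    # rule below is an assumption; the log only records the copy and its checksum
    echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cloud-admin
    chmod 0640 /etc/sudoers.d/cloud-admin
    visudo -cf /etc/sudoers.d/cloud-admin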
Dec 05 09:14:05 np0005546606.novalocal sudo[26789]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeaeqlecxkvadjxburewpmqrsbfxgpzd ; /usr/bin/python3'
Dec 05 09:14:05 np0005546606.novalocal sudo[26789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:14:05 np0005546606.novalocal python3[26795]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 05 09:14:05 np0005546606.novalocal systemd[1]: Starting Hostname Service...
Dec 05 09:14:05 np0005546606.novalocal systemd[1]: Started Hostname Service.
Dec 05 09:14:05 np0005546606.novalocal systemd-hostnamed[26885]: Changed pretty hostname to 'compute-0'
Dec 05 09:14:05 compute-0 systemd-hostnamed[26885]: Hostname set to <compute-0> (static)
Dec 05 09:14:05 compute-0 NetworkManager[7200]: <info>  [1764926045.9687] hostname: static hostname changed from "np0005546606.novalocal" to "compute-0"
Dec 05 09:14:05 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 09:14:05 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 09:14:06 compute-0 sudo[26789]: pam_unix(sudo:session): session closed for user root
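ansible.builtin.hostname with use=systemd goes through systemd-hostnamed, which is why both the pretty and static hostnames change and every subsequent line carries the new compute-0 prefix. The one-liner equivalent:

    hostnamectl set-hostname compute-0
    hostnamectl status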
Dec 05 09:14:06 compute-0 sshd-session[24688]: Connection closed by 38.102.83.114 port 49574
Dec 05 09:14:06 compute-0 sshd-session[24637]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:14:06 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 05 09:14:06 compute-0 systemd[1]: session-6.scope: Consumed 2.243s CPU time.
Dec 05 09:14:06 compute-0 systemd-logind[789]: Session 6 logged out. Waiting for processes to exit.
Dec 05 09:14:06 compute-0 systemd-logind[789]: Removed session 6.
Dec 05 09:14:13 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 09:14:13 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 09:14:13 compute-0 systemd[1]: man-db-cache-update.service: Consumed 49.964s CPU time.
Dec 05 09:14:13 compute-0 systemd[1]: run-rbe693e2b888d4b3781b1474c38696e97.service: Deactivated successfully.
Dec 05 09:14:15 compute-0 irqbalance[784]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 05 09:14:15 compute-0 irqbalance[784]: IRQ 27 affinity is now unmanaged
Dec 05 09:14:16 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 09:14:36 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 09:15:45 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 05 09:15:45 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 05 09:15:45 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 05 09:15:45 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 05 09:18:52 compute-0 sshd-session[29965]: Accepted publickey for zuul from 38.129.56.31 port 39484 ssh2: RSA SHA256:KFmgdvKpB8DAdlN2nfDmmuFckJgJGHDMrTR5Gyr7RXM
Dec 05 09:18:52 compute-0 systemd-logind[789]: New session 7 of user zuul.
Dec 05 09:18:52 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 05 09:18:52 compute-0 sshd-session[29965]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:18:53 compute-0 python3[30041]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:18:55 compute-0 sudo[30155]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qabvdxdihrzqmltryppxxtgyphmqplpx ; /usr/bin/python3'
Dec 05 09:18:55 compute-0 sudo[30155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:55 compute-0 python3[30157]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:18:55 compute-0 sudo[30155]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:56 compute-0 sudo[30228]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nefqtxznlfzvidaqunhsclbjybtyguxj ; /usr/bin/python3'
Dec 05 09:18:56 compute-0 sudo[30228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:56 compute-0 python3[30230]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764926335.4783492-34041-110952439515819/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:18:56 compute-0 sudo[30228]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:56 compute-0 sudo[30254]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdqetxmudulrkytvgxiallcgbbgwchnq ; /usr/bin/python3'
Dec 05 09:18:56 compute-0 sudo[30254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:56 compute-0 python3[30256]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:18:56 compute-0 sudo[30254]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:56 compute-0 sudo[30327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfegqathoktjnnpgbinydtqkgpxkhoft ; /usr/bin/python3'
Dec 05 09:18:56 compute-0 sudo[30327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:57 compute-0 python3[30329]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764926335.4783492-34041-110952439515819/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:18:57 compute-0 sudo[30327]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:57 compute-0 sudo[30353]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cazaphnxjmvmgavdlegfybhwhnpjpfhs ; /usr/bin/python3'
Dec 05 09:18:57 compute-0 sudo[30353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:57 compute-0 python3[30355]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:18:57 compute-0 sudo[30353]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:57 compute-0 sudo[30426]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzzfdfjvqfrysrnfzqbuxrrzcmgddnzz ; /usr/bin/python3'
Dec 05 09:18:57 compute-0 sudo[30426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:57 compute-0 python3[30428]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764926335.4783492-34041-110952439515819/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:18:57 compute-0 sudo[30426]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:57 compute-0 sudo[30452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-manmkchssoyirnjmlxyqipxipthvwcvo ; /usr/bin/python3'
Dec 05 09:18:57 compute-0 sudo[30452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:57 compute-0 python3[30454]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:18:57 compute-0 sudo[30452]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:58 compute-0 sudo[30525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afmzqwwqzbvfslqzgiodwtocawheemep ; /usr/bin/python3'
Dec 05 09:18:58 compute-0 sudo[30525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:58 compute-0 python3[30527]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764926335.4783492-34041-110952439515819/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:18:58 compute-0 sudo[30525]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:58 compute-0 sudo[30551]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuqjywaazphyelafqpzhsepavstdepwy ; /usr/bin/python3'
Dec 05 09:18:58 compute-0 sudo[30551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:58 compute-0 python3[30553]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:18:58 compute-0 sudo[30551]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:58 compute-0 sudo[30624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpgwojeklvttryvefygzxwdknslvlbhd ; /usr/bin/python3'
Dec 05 09:18:58 compute-0 sudo[30624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:58 compute-0 python3[30626]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764926335.4783492-34041-110952439515819/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:18:58 compute-0 sudo[30624]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:59 compute-0 sudo[30650]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uekdqefuteioppzwwfwufrgfzfehzjhk ; /usr/bin/python3'
Dec 05 09:18:59 compute-0 sudo[30650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:59 compute-0 python3[30652]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:18:59 compute-0 sudo[30650]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:59 compute-0 sudo[30723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsuniivryycedpqdmwdpzvsloujbvbpq ; /usr/bin/python3'
Dec 05 09:18:59 compute-0 sudo[30723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:59 compute-0 python3[30725]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764926335.4783492-34041-110952439515819/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:18:59 compute-0 sudo[30723]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:59 compute-0 sudo[30749]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbrrazwcynrltzqrzmtrrkxjqsletsnu ; /usr/bin/python3'
Dec 05 09:18:59 compute-0 sudo[30749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:59 compute-0 python3[30751]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:18:59 compute-0 sudo[30749]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:00 compute-0 sudo[30822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pifotvcqpkwsklzxocclhcrhrheykmlc ; /usr/bin/python3'
Dec 05 09:19:00 compute-0 sudo[30822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:00 compute-0 python3[30824]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764926335.4783492-34041-110952439515819/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:19:00 compute-0 sudo[30822]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:03 compute-0 sshd-session[30850]: Unable to negotiate with 192.168.122.11 port 34804: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 05 09:19:03 compute-0 sshd-session[30849]: Connection closed by 192.168.122.11 port 34776 [preauth]
Dec 05 09:19:03 compute-0 sshd-session[30851]: Connection closed by 192.168.122.11 port 34790 [preauth]
Dec 05 09:19:03 compute-0 sshd-session[30853]: Unable to negotiate with 192.168.122.11 port 34820: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 05 09:19:03 compute-0 sshd-session[30852]: Unable to negotiate with 192.168.122.11 port 34834: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 05 09:19:15 compute-0 python3[30882]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:24:15 compute-0 sshd-session[29968]: Received disconnect from 38.129.56.31 port 39484:11: disconnected by user
Dec 05 09:24:15 compute-0 sshd-session[29968]: Disconnected from user zuul 38.129.56.31 port 39484
Dec 05 09:24:15 compute-0 sshd-session[29965]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:24:15 compute-0 systemd-logind[789]: Session 7 logged out. Waiting for processes to exit.
Dec 05 09:24:15 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 05 09:24:15 compute-0 systemd[1]: session-7.scope: Consumed 4.973s CPU time.
Dec 05 09:24:15 compute-0 systemd-logind[789]: Removed session 7.
Dec 05 09:33:24 compute-0 sshd-session[30891]: Accepted publickey for zuul from 192.168.122.30 port 60118 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:33:24 compute-0 systemd-logind[789]: New session 8 of user zuul.
Dec 05 09:33:24 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 05 09:33:24 compute-0 sshd-session[30891]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:33:25 compute-0 python3.9[31044]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:33:26 compute-0 sudo[31223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxzynrdelwbjdtnbtdfdmywzigqluago ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927206.0189898-56-63372779677230/AnsiballZ_command.py'
Dec 05 09:33:26 compute-0 sudo[31223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:26 compute-0 python3.9[31225]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:33:38 compute-0 sudo[31223]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:38 compute-0 sshd-session[30894]: Connection closed by 192.168.122.30 port 60118
Dec 05 09:33:38 compute-0 sshd-session[30891]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:33:38 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 05 09:33:38 compute-0 systemd[1]: session-8.scope: Consumed 9.538s CPU time.
Dec 05 09:33:38 compute-0 systemd-logind[789]: Session 8 logged out. Waiting for processes to exit.
Dec 05 09:33:38 compute-0 systemd-logind[789]: Removed session 8.
Dec 05 09:33:54 compute-0 sshd-session[31282]: Accepted publickey for zuul from 192.168.122.30 port 55598 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:33:54 compute-0 systemd-logind[789]: New session 9 of user zuul.
Dec 05 09:33:54 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 05 09:33:54 compute-0 sshd-session[31282]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:33:55 compute-0 python3.9[31435]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 05 09:33:56 compute-0 python3.9[31609]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:33:57 compute-0 sudo[31759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kofuyxkwvvdfwrzlydnxtyjxitghgeuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927237.0771587-93-241172005567132/AnsiballZ_command.py'
Dec 05 09:33:57 compute-0 sudo[31759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:57 compute-0 python3.9[31761]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:33:57 compute-0 sudo[31759]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:58 compute-0 sudo[31912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwctutkckayaimmixbqfonagzeeqjnsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927238.1082342-129-241916396699748/AnsiballZ_stat.py'
Dec 05 09:33:58 compute-0 sudo[31912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:58 compute-0 python3.9[31914]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:33:58 compute-0 sudo[31912]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:59 compute-0 sudo[32064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pawswjrbzdiklhhqhjknvfualbfsocdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927238.9511054-153-213720799937938/AnsiballZ_file.py'
Dec 05 09:33:59 compute-0 sudo[32064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:59 compute-0 python3.9[32066]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:59 compute-0 sudo[32064]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:00 compute-0 sudo[32216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqojxsfmyaqzrputicwhukxbtxkualhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927239.9636555-177-104222457207831/AnsiballZ_stat.py'
Dec 05 09:34:00 compute-0 sudo[32216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:00 compute-0 python3.9[32218]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:34:00 compute-0 sudo[32216]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:00 compute-0 sudo[32339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibzyakdugtvynnbvdfcbjcmfyrpoxpyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927239.9636555-177-104222457207831/AnsiballZ_copy.py'
Dec 05 09:34:00 compute-0 sudo[32339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:01 compute-0 python3.9[32341]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927239.9636555-177-104222457207831/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:01 compute-0 sudo[32339]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:01 compute-0 anacron[4305]: Job `cron.weekly' started
Dec 05 09:34:01 compute-0 anacron[4305]: Job `cron.weekly' terminated
Dec 05 09:34:01 compute-0 sudo[32493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccfjzatisqmwksegcousjycctubmvngd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927241.3538806-222-99083661069630/AnsiballZ_setup.py'
Dec 05 09:34:01 compute-0 sudo[32493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:01 compute-0 python3.9[32495]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:34:02 compute-0 sudo[32493]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:02 compute-0 sudo[32649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnxrxrndepeleyczzjzrbpzuungetbng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927242.6019945-246-102472027991854/AnsiballZ_file.py'
Dec 05 09:34:02 compute-0 sudo[32649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:03 compute-0 python3.9[32651]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:34:03 compute-0 sudo[32649]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:03 compute-0 sudo[32801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsssmmgwfrpntucabkqknkcimaxtnati ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927243.503652-273-140549999081239/AnsiballZ_file.py'
Dec 05 09:34:03 compute-0 sudo[32801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:03 compute-0 python3.9[32803]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:34:03 compute-0 sudo[32801]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:04 compute-0 python3.9[32953]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:34:09 compute-0 python3.9[33206]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:10 compute-0 python3.9[33356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:34:11 compute-0 python3.9[33510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:34:12 compute-0 sudo[33666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jljmxicqvbpdtqiwasjxqrwhmookqkub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927252.0163343-417-49883928069281/AnsiballZ_setup.py'
Dec 05 09:34:12 compute-0 sudo[33666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:12 compute-0 python3.9[33668]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:34:12 compute-0 sudo[33666]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:13 compute-0 sudo[33750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmgvaupjttcmjxwsqiotqzvtnpqhznbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927252.0163343-417-49883928069281/AnsiballZ_dnf.py'
Dec 05 09:34:13 compute-0 sudo[33750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:13 compute-0 python3.9[33752]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:35:07 compute-0 systemd[1]: Reloading.
Dec 05 09:35:07 compute-0 systemd-rc-local-generator[33943]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:35:07 compute-0 systemd[1]: Starting dnf makecache...
Dec 05 09:35:07 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 05 09:35:07 compute-0 dnf[33960]: Failed determining last makecache time.
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-openstack-barbican-42b4c41831408a8e323 127 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 132 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-openstack-cinder-1c00d6490d88e436f26ef 120 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 systemd[1]: Reloading.
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-python-stevedore-c4acc5639fd2329372142 153 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-python-cloudkitty-tests-tempest-2c80f8 128 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 systemd-rc-local-generator[33996]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 126 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 128 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-python-designate-tests-tempest-347fdbc 137 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-openstack-glance-1fd12c29b339f30fe823e 145 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 156 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-openstack-manila-3c01b7181572c95dac462 157 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-python-whitebox-neutron-tests-tempest- 159 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 05 09:35:07 compute-0 dnf[33960]: delorean-openstack-octavia-ba397f07a7331190208c 136 kB/s | 3.0 kB     00:00
Dec 05 09:35:07 compute-0 systemd[1]: Reloading.
Dec 05 09:35:07 compute-0 systemd-rc-local-generator[34043]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:35:08 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 05 09:35:08 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Dec 05 09:35:08 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Dec 05 09:35:08 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Dec 05 09:35:08 compute-0 dnf[33960]: delorean-openstack-watcher-c014f81a8647287f6dcc 5.6 kB/s | 3.0 kB     00:00
Dec 05 09:35:08 compute-0 dnf[33960]: delorean-ansible-config_template-5ccaa22121a7ff 158 kB/s | 3.0 kB     00:00
Dec 05 09:35:08 compute-0 dnf[33960]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 129 kB/s | 3.0 kB     00:00
Dec 05 09:35:08 compute-0 dnf[33960]: delorean-openstack-swift-dc98a8463506ac520c469a 129 kB/s | 3.0 kB     00:00
Dec 05 09:35:08 compute-0 dnf[33960]: delorean-python-tempestconf-8515371b7cceebd4282 125 kB/s | 3.0 kB     00:00
Dec 05 09:35:08 compute-0 dnf[33960]: delorean-openstack-heat-ui-013accbfd179753bc3f0 165 kB/s | 3.0 kB     00:00
Dec 05 09:35:08 compute-0 dnf[33960]: CentOS Stream 9 - BaseOS                         30 kB/s | 7.3 kB     00:00
Dec 05 09:35:09 compute-0 dnf[33960]: CentOS Stream 9 - AppStream                      32 kB/s | 7.4 kB     00:00
Dec 05 09:35:09 compute-0 dnf[33960]: CentOS Stream 9 - CRB                            32 kB/s | 7.2 kB     00:00
Dec 05 09:35:09 compute-0 dnf[33960]: CentOS Stream 9 - Extras packages                71 kB/s | 8.3 kB     00:00
Dec 05 09:35:09 compute-0 dnf[33960]: dlrn-antelope-testing                           150 kB/s | 3.0 kB     00:00
Dec 05 09:35:09 compute-0 dnf[33960]: dlrn-antelope-build-deps                        152 kB/s | 3.0 kB     00:00
Dec 05 09:35:09 compute-0 dnf[33960]: centos9-rabbitmq                                143 kB/s | 3.0 kB     00:00
Dec 05 09:35:09 compute-0 dnf[33960]: centos9-storage                                 139 kB/s | 3.0 kB     00:00
Dec 05 09:35:09 compute-0 dnf[33960]: centos9-opstools                                144 kB/s | 3.0 kB     00:00
Dec 05 09:35:09 compute-0 dnf[33960]: NFV SIG OpenvSwitch                             150 kB/s | 3.0 kB     00:00
Dec 05 09:35:09 compute-0 dnf[33960]: repo-setup-centos-appstream                     158 kB/s | 4.4 kB     00:00
Dec 05 09:35:09 compute-0 dnf[33960]: repo-setup-centos-baseos                         93 kB/s | 3.9 kB     00:00
Dec 05 09:35:10 compute-0 dnf[33960]: repo-setup-centos-highavailability              158 kB/s | 3.9 kB     00:00
Dec 05 09:35:10 compute-0 dnf[33960]: repo-setup-centos-powertools                    197 kB/s | 4.3 kB     00:00
Dec 05 09:35:10 compute-0 dnf[33960]: Extra Packages for Enterprise Linux 9 - x86_64  101 kB/s |  30 kB     00:00
Dec 05 09:35:11 compute-0 dnf[33960]: Metadata cache created.
Dec 05 09:35:11 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 05 09:35:11 compute-0 systemd[1]: Finished dnf makecache.
Dec 05 09:35:11 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.905s CPU time.
Dec 05 09:36:30 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Dec 05 09:36:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:36:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 05 09:36:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:36:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:36:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:36:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:36:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:36:30 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 05 09:36:30 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 09:36:30 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 09:36:30 compute-0 systemd[1]: Reloading.
Dec 05 09:36:30 compute-0 systemd-rc-local-generator[34425]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:36:30 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 09:36:31 compute-0 sudo[33750]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:31 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 09:36:31 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 09:36:31 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.272s CPU time.
Dec 05 09:36:31 compute-0 systemd[1]: run-r02b1bafeea504edd9be03e23bdbffebc.service: Deactivated successfully.
Dec 05 09:36:40 compute-0 sudo[35337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvkgkrajtrrujfwlijwdwgzqixztlfap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927399.956398-453-66485802334152/AnsiballZ_command.py'
Dec 05 09:36:40 compute-0 sudo[35337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:40 compute-0 python3.9[35339]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:36:41 compute-0 sudo[35337]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:42 compute-0 sudo[35618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovfgsumxngbhphloeqmclwlcxxrxnvmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927401.9958696-477-62828585835406/AnsiballZ_selinux.py'
Dec 05 09:36:42 compute-0 sudo[35618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:42 compute-0 python3.9[35620]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 05 09:36:42 compute-0 sudo[35618]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:43 compute-0 sudo[35770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfjogmkyjrlnhnusuoqgmfobvjebsncr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927403.4572465-510-235917440647766/AnsiballZ_command.py'
Dec 05 09:36:43 compute-0 sudo[35770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:43 compute-0 python3.9[35772]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 05 09:36:47 compute-0 sudo[35770]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:48 compute-0 sudo[35924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqnvueaznywrtjywkpatebabklitibml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927408.2406073-534-254144614782747/AnsiballZ_file.py'
Dec 05 09:36:48 compute-0 sudo[35924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:53 compute-0 python3.9[35926]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:36:53 compute-0 sudo[35924]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:54 compute-0 sudo[36076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuslxombuwxokzertelcrrfcnrexwvuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927413.700476-558-259397078108765/AnsiballZ_mount.py'
Dec 05 09:36:54 compute-0 sudo[36076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:54 compute-0 python3.9[36078]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 05 09:36:54 compute-0 sudo[36076]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:57 compute-0 sudo[36228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luczoofkakshrxnysnwyzrbsubfbrciz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927417.5346527-642-252440781993023/AnsiballZ_file.py'
Dec 05 09:36:57 compute-0 sudo[36228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:00 compute-0 python3.9[36230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:37:00 compute-0 sudo[36228]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:00 compute-0 sudo[36380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnpulppzeyadymiysntcncmedkiyxhsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927420.1811724-666-61964080481095/AnsiballZ_stat.py'
Dec 05 09:37:00 compute-0 sudo[36380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:00 compute-0 python3.9[36382]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:00 compute-0 sudo[36380]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:00 compute-0 sudo[36503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyalafonvmwimceqqmdnhwkpprmdcbtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927420.1811724-666-61964080481095/AnsiballZ_copy.py'
Dec 05 09:37:00 compute-0 sudo[36503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:01 compute-0 python3.9[36505]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927420.1811724-666-61964080481095/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=707b529c46d00ae67cf5e28b4fee780ec58089b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:01 compute-0 sudo[36503]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:02 compute-0 sudo[36655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcsktbhmenbujermywzxkfsrcgyonwfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927422.0583594-738-249758757184521/AnsiballZ_stat.py'
Dec 05 09:37:02 compute-0 sudo[36655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:02 compute-0 python3.9[36657]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:37:02 compute-0 sudo[36655]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:03 compute-0 sudo[36807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daxgxdvrkqirwfahjdrngoydumfksver ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927422.8223603-762-186227378396021/AnsiballZ_command.py'
Dec 05 09:37:03 compute-0 sudo[36807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:03 compute-0 python3.9[36809]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:37:03 compute-0 sudo[36807]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:03 compute-0 sudo[36960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjvgnbymrylipvseyupbzjvngtzgyqvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927423.6554015-786-32739987093142/AnsiballZ_file.py'
Dec 05 09:37:03 compute-0 sudo[36960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:04 compute-0 python3.9[36962]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:04 compute-0 sudo[36960]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:05 compute-0 sudo[37112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkcbqmdqsbamozerbavxlawgkgmtaffq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927424.7300258-819-157862840103624/AnsiballZ_getent.py'
Dec 05 09:37:05 compute-0 sudo[37112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:05 compute-0 python3.9[37114]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 05 09:37:05 compute-0 sudo[37112]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:06 compute-0 sudo[37265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prioionitvlpdfupkocescbdysehamdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927425.6104171-843-14358271873246/AnsiballZ_group.py'
Dec 05 09:37:06 compute-0 sudo[37265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:06 compute-0 python3.9[37267]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 09:37:06 compute-0 groupadd[37268]: group added to /etc/group: name=qemu, GID=107
Dec 05 09:37:06 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:37:06 compute-0 groupadd[37268]: group added to /etc/gshadow: name=qemu
Dec 05 09:37:06 compute-0 groupadd[37268]: new group: name=qemu, GID=107
Dec 05 09:37:06 compute-0 sudo[37265]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:07 compute-0 sudo[37424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcwwagjryhkajoqdfkcqqlpgrmdqajoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927426.9207835-867-172221505752643/AnsiballZ_user.py'
Dec 05 09:37:07 compute-0 sudo[37424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:07 compute-0 python3.9[37426]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 05 09:37:07 compute-0 useradd[37428]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 05 09:37:07 compute-0 sudo[37424]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:08 compute-0 sudo[37584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbxbggwphgizqqjwegcxmrcsazfmhfby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927427.9880965-891-47113533195148/AnsiballZ_getent.py'
Dec 05 09:37:08 compute-0 sudo[37584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:08 compute-0 python3.9[37586]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 05 09:37:08 compute-0 sudo[37584]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:09 compute-0 sudo[37737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emnypvulgfuelqjbocutgxvskoveppek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927428.8024027-915-18927134624210/AnsiballZ_group.py'
Dec 05 09:37:09 compute-0 sudo[37737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:09 compute-0 python3.9[37739]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 09:37:09 compute-0 groupadd[37740]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 05 09:37:09 compute-0 groupadd[37740]: group added to /etc/gshadow: name=hugetlbfs
Dec 05 09:37:09 compute-0 groupadd[37740]: new group: name=hugetlbfs, GID=42477
Dec 05 09:37:09 compute-0 sudo[37737]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:09 compute-0 sudo[37895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naslxhsjfpypfunwteuvevrimlbwhyas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927429.6829174-942-98793988187117/AnsiballZ_file.py'
Dec 05 09:37:09 compute-0 sudo[37895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:10 compute-0 python3.9[37897]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 05 09:37:10 compute-0 sudo[37895]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:10 compute-0 sudo[38047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tateivwtpbhmvdrkovbxnnxslyqvjnjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927430.6355064-975-208367773766890/AnsiballZ_dnf.py'
Dec 05 09:37:10 compute-0 sudo[38047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:11 compute-0 python3.9[38049]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:37:15 compute-0 sudo[38047]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:15 compute-0 sudo[38200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlkjknzbdnpkpmkgmvmurzqljblsyrev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927435.348121-999-200666003150826/AnsiballZ_file.py'
Dec 05 09:37:15 compute-0 sudo[38200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:15 compute-0 python3.9[38202]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:37:15 compute-0 sudo[38200]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:16 compute-0 sudo[38352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfdbipmjpbrmlbwflavntncvelpwpywx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927436.0378647-1023-80959033550661/AnsiballZ_stat.py'
Dec 05 09:37:16 compute-0 sudo[38352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:16 compute-0 python3.9[38354]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:16 compute-0 sudo[38352]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:16 compute-0 sudo[38475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gswxogpkynkzpdxmtehzbcitcnhdjanl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927436.0378647-1023-80959033550661/AnsiballZ_copy.py'
Dec 05 09:37:16 compute-0 sudo[38475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:17 compute-0 python3.9[38477]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927436.0378647-1023-80959033550661/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:37:17 compute-0 sudo[38475]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:17 compute-0 sudo[38627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgppbvjwirvykjvdinjuugqxqgobqnvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927437.363892-1068-263923752563165/AnsiballZ_systemd.py'
Dec 05 09:37:17 compute-0 sudo[38627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:18 compute-0 python3.9[38629]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:37:18 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 05 09:37:18 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 05 09:37:18 compute-0 kernel: Bridge firewalling registered
Dec 05 09:37:18 compute-0 systemd-modules-load[38633]: Inserted module 'br_netfilter'
Dec 05 09:37:18 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 05 09:37:18 compute-0 sudo[38627]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:19 compute-0 sudo[38786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zspnrilblqpkcxbfksijtjhdbuewosww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927439.138327-1092-129523591334698/AnsiballZ_stat.py'
Dec 05 09:37:19 compute-0 sudo[38786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:19 compute-0 python3.9[38788]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:19 compute-0 sudo[38786]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:19 compute-0 sudo[38909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upffkznrfhiejmqbadaaaohmfohzxlrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927439.138327-1092-129523591334698/AnsiballZ_copy.py'
Dec 05 09:37:19 compute-0 sudo[38909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:20 compute-0 python3.9[38911]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927439.138327-1092-129523591334698/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:37:20 compute-0 sudo[38909]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:21 compute-0 sudo[39061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hepnxajajnborrigabjdmilarlcdwphu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927440.7192202-1146-52940092001398/AnsiballZ_dnf.py'
Dec 05 09:37:21 compute-0 sudo[39061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:21 compute-0 python3.9[39063]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:37:24 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Dec 05 09:37:24 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Dec 05 09:37:24 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 09:37:24 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 09:37:24 compute-0 systemd[1]: Reloading.
Dec 05 09:37:24 compute-0 systemd-rc-local-generator[39120]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:37:25 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 09:37:25 compute-0 sudo[39061]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:27 compute-0 python3.9[41722]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:37:28 compute-0 python3.9[42894]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 05 09:37:28 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 09:37:28 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 09:37:28 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.328s CPU time.
Dec 05 09:37:28 compute-0 systemd[1]: run-r05f0ed1da0d34da38ff4ef12b8b6db4a.service: Deactivated successfully.
Dec 05 09:37:28 compute-0 python3.9[43076]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:37:29 compute-0 sudo[43226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eechcgqplvcpmrjprknxkvoebmnmpezd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927449.487365-1263-94607675206164/AnsiballZ_command.py'
Dec 05 09:37:29 compute-0 sudo[43226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:29 compute-0 python3.9[43228]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:37:30 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 05 09:37:30 compute-0 systemd[1]: Starting Authorization Manager...
Dec 05 09:37:30 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 05 09:37:30 compute-0 polkitd[43445]: Started polkitd version 0.117
Dec 05 09:37:30 compute-0 polkitd[43445]: Loading rules from directory /etc/polkit-1/rules.d
Dec 05 09:37:30 compute-0 polkitd[43445]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 05 09:37:30 compute-0 polkitd[43445]: Finished loading, compiling and executing 2 rules
Dec 05 09:37:30 compute-0 systemd[1]: Started Authorization Manager.
Dec 05 09:37:30 compute-0 polkitd[43445]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 05 09:37:30 compute-0 sudo[43226]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:31 compute-0 sudo[43613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npliitkuqaatlbhkdqlqxjhqgcxgbmbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927451.1242683-1290-94417983616524/AnsiballZ_systemd.py'
Dec 05 09:37:31 compute-0 sudo[43613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:31 compute-0 python3.9[43615]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:37:31 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 05 09:37:31 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 05 09:37:31 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 05 09:37:31 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 05 09:37:32 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 05 09:37:32 compute-0 sudo[43613]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:32 compute-0 python3.9[43777]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 05 09:37:37 compute-0 sudo[43927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkekiwjaqvaxnvfrekgtmqyrsqpwzpbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927457.0537388-1461-74130970519545/AnsiballZ_systemd.py'
Dec 05 09:37:37 compute-0 sudo[43927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:37 compute-0 python3.9[43929]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:37:37 compute-0 systemd[1]: Reloading.
Dec 05 09:37:37 compute-0 systemd-rc-local-generator[43958]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:37:37 compute-0 sudo[43927]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:38 compute-0 sudo[44117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaaewnwqckhmfsqyspzmubkvsufyngsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927458.115181-1461-175762437400531/AnsiballZ_systemd.py'
Dec 05 09:37:38 compute-0 sudo[44117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:38 compute-0 python3.9[44119]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:37:38 compute-0 systemd[1]: Reloading.
Dec 05 09:37:38 compute-0 systemd-rc-local-generator[44149]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:37:39 compute-0 sudo[44117]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:39 compute-0 sudo[44306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpviqglredvcsxrweoryhkwuaekblgly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927459.5523853-1509-62404882497606/AnsiballZ_command.py'
Dec 05 09:37:39 compute-0 sudo[44306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:40 compute-0 python3.9[44308]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:37:40 compute-0 sudo[44306]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:40 compute-0 sudo[44459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkowqtuncnveomshkinvhsaxxfyhvvom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927460.3247511-1533-12368066764006/AnsiballZ_command.py'
Dec 05 09:37:40 compute-0 sudo[44459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:40 compute-0 python3.9[44461]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:37:40 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 05 09:37:40 compute-0 sudo[44459]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:41 compute-0 sudo[44612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acrjspbwjwevazblbutwnkzgcvcpofpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927461.1229005-1557-275741851618665/AnsiballZ_command.py'
Dec 05 09:37:41 compute-0 sudo[44612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:41 compute-0 python3.9[44614]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:37:43 compute-0 sudo[44612]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:43 compute-0 sudo[44774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gokbvzrsgajvavjhqmszneagyjpjjgnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927463.4580033-1581-66070832116116/AnsiballZ_command.py'
Dec 05 09:37:43 compute-0 sudo[44774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:43 compute-0 python3.9[44776]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:37:43 compute-0 sudo[44774]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:44 compute-0 sudo[44927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtxxvgswvrvmnmptbutsecmmlsrjowbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927464.1658556-1605-75094587855150/AnsiballZ_systemd.py'
Dec 05 09:37:44 compute-0 sudo[44927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:44 compute-0 python3.9[44929]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:37:44 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 05 09:37:44 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 05 09:37:44 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 05 09:37:44 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 05 09:37:44 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 05 09:37:44 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 05 09:37:44 compute-0 sudo[44927]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:45 compute-0 sshd-session[31285]: Connection closed by 192.168.122.30 port 55598
Dec 05 09:37:45 compute-0 sshd-session[31282]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:37:45 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 05 09:37:45 compute-0 systemd[1]: session-9.scope: Consumed 2min 29.335s CPU time.
Dec 05 09:37:45 compute-0 systemd-logind[789]: Session 9 logged out. Waiting for processes to exit.
Dec 05 09:37:45 compute-0 systemd-logind[789]: Removed session 9.
Dec 05 09:37:51 compute-0 sshd-session[44960]: Accepted publickey for zuul from 192.168.122.30 port 54148 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:37:51 compute-0 systemd-logind[789]: New session 10 of user zuul.
Dec 05 09:37:51 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 05 09:37:51 compute-0 sshd-session[44960]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:37:52 compute-0 python3.9[45113]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:37:53 compute-0 sudo[45267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skwfluoitviaaevgkqqyvgswqkqaamrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927473.0957155-68-26243907761239/AnsiballZ_getent.py'
Dec 05 09:37:53 compute-0 sudo[45267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:53 compute-0 python3.9[45269]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 05 09:37:53 compute-0 sudo[45267]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:54 compute-0 sudo[45420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbciimfrsgzpyhzqzsstbbiheowntsas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927474.028406-92-62723855175047/AnsiballZ_group.py'
Dec 05 09:37:54 compute-0 sudo[45420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:54 compute-0 python3.9[45422]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 09:37:54 compute-0 groupadd[45423]: group added to /etc/group: name=openvswitch, GID=42476
Dec 05 09:37:54 compute-0 groupadd[45423]: group added to /etc/gshadow: name=openvswitch
Dec 05 09:37:54 compute-0 groupadd[45423]: new group: name=openvswitch, GID=42476
Dec 05 09:37:54 compute-0 sudo[45420]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:55 compute-0 sudo[45578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdstrzlgiuwimswrzdemucnesvbwaing ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927474.9581983-116-13359296984327/AnsiballZ_user.py'
Dec 05 09:37:55 compute-0 sudo[45578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:55 compute-0 python3.9[45580]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 05 09:37:55 compute-0 useradd[45582]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 05 09:37:55 compute-0 useradd[45582]: add 'openvswitch' to group 'hugetlbfs'
Dec 05 09:37:55 compute-0 useradd[45582]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 05 09:37:55 compute-0 sudo[45578]: pam_unix(sudo:session): session closed for user root
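The groupadd[45423]/useradd[45582] sequence above is ansible.builtin.group and ansible.builtin.user shelling out to shadow-utils. A minimal Python sketch of the same two calls, run as root, with the modules' state=present idempotence checks omitted:

    # Recreate the openvswitch account exactly as logged above (run as root).
    import subprocess

    subprocess.run(["groupadd", "--gid", "42476", "openvswitch"], check=True)
    subprocess.run([
        "useradd",
        "--uid", "42476",            # uid=42476
        "--gid", "42476",            # primary group created above
        "--groups", "hugetlbfs",     # groups=['hugetlbfs']
        "--shell", "/sbin/nologin",  # shell=/sbin/nologin
        "--comment", "openvswitch user",
        "--create-home",             # create_home=True -> /home/openvswitch
        "openvswitch",
    ], check=True)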
Dec 05 09:37:56 compute-0 sudo[45738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhroaapzgkqjrucotpfwplncirtvcbho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927476.123476-146-105140167440928/AnsiballZ_setup.py'
Dec 05 09:37:56 compute-0 sudo[45738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:56 compute-0 python3.9[45740]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:37:56 compute-0 sudo[45738]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:57 compute-0 sudo[45822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjnetpiwrmysxrennadjjaynhubgjizd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927476.123476-146-105140167440928/AnsiballZ_dnf.py'
Dec 05 09:37:57 compute-0 sudo[45822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:57 compute-0 python3.9[45824]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 09:38:01 compute-0 sudo[45822]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:03 compute-0 sudo[45985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijxlaioecbtzeefghhlxzfdhegqrukeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927483.230376-188-57363599636900/AnsiballZ_dnf.py'
Dec 05 09:38:03 compute-0 sudo[45985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:03 compute-0 python3.9[45987]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
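The pair of dnf tasks above (pid 45824 with download_only=True, then pid 45987 with state=present) splits the work into a fetch phase and the actual transaction, keeping the slow network step separate from the install whose scriptlets produce the SELinux policy reload logged next. A sketch of the same two phases with the dnf CLI:

    # Phase 1: download openvswitch and its dependencies only; phase 2: install.
    import subprocess

    subprocess.run(["dnf", "-y", "--downloadonly", "install", "openvswitch"], check=True)
    subprocess.run(["dnf", "-y", "install", "openvswitch"], check=True)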
Dec 05 09:38:17 compute-0 kernel: SELinux:  Converting 2730 SID table entries...
Dec 05 09:38:17 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:38:17 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 05 09:38:17 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:38:17 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:38:17 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:38:17 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:38:17 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:38:17 compute-0 groupadd[46010]: group added to /etc/group: name=unbound, GID=993
Dec 05 09:38:17 compute-0 groupadd[46010]: group added to /etc/gshadow: name=unbound
Dec 05 09:38:17 compute-0 groupadd[46010]: new group: name=unbound, GID=993
Dec 05 09:38:18 compute-0 useradd[46017]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 05 09:38:18 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 05 09:38:18 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 05 09:38:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 09:38:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 09:38:22 compute-0 systemd[1]: Reloading.
Dec 05 09:38:22 compute-0 systemd-sysv-generator[46520]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:38:22 compute-0 systemd-rc-local-generator[46511]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:38:22 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 09:38:23 compute-0 sudo[45985]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 09:38:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 09:38:23 compute-0 systemd[1]: run-r34a7df295d16419e8bbc971bbaa855f8.service: Deactivated successfully.
Dec 05 09:38:24 compute-0 sudo[47084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdvqopnrziiklrfaafljfkyyrkrckyrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927503.658656-212-233462908460897/AnsiballZ_systemd.py'
Dec 05 09:38:24 compute-0 sudo[47084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:24 compute-0 python3.9[47086]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:38:24 compute-0 systemd[1]: Reloading.
Dec 05 09:38:24 compute-0 systemd-rc-local-generator[47112]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:38:24 compute-0 systemd-sysv-generator[47119]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:38:24 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 05 09:38:24 compute-0 chown[47127]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 05 09:38:25 compute-0 ovs-ctl[47132]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 05 09:38:25 compute-0 ovs-ctl[47132]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 05 09:38:25 compute-0 ovs-ctl[47132]: Starting ovsdb-server [  OK  ]
Dec 05 09:38:25 compute-0 ovs-vsctl[47181]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 05 09:38:25 compute-0 ovs-vsctl[47197]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"41643524-e4b6-4069-ba08-6e5872c74bd3\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 05 09:38:25 compute-0 ovs-ctl[47132]: Configuring Open vSwitch system IDs [  OK  ]
Dec 05 09:38:25 compute-0 ovs-ctl[47132]: Enabling remote OVSDB managers [  OK  ]
Dec 05 09:38:25 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 05 09:38:25 compute-0 ovs-vsctl[47206]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 05 09:38:25 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 05 09:38:25 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 05 09:38:25 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 05 09:38:25 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 05 09:38:25 compute-0 ovs-ctl[47251]: Inserting openvswitch module [  OK  ]
Dec 05 09:38:25 compute-0 ovs-ctl[47220]: Starting ovs-vswitchd [  OK  ]
Dec 05 09:38:25 compute-0 ovs-vsctl[47269]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 05 09:38:25 compute-0 ovs-ctl[47220]: Enabling remote OVSDB managers [  OK  ]
Dec 05 09:38:25 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 05 09:38:25 compute-0 systemd[1]: Starting Open vSwitch...
Dec 05 09:38:25 compute-0 systemd[1]: Finished Open vSwitch.
Dec 05 09:38:25 compute-0 sudo[47084]: pam_unix(sudo:session): session closed for user root
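ovs-ctl's first-start path is visible above: conf.db is missing, so it creates an empty database, starts ovsdb-server, seeds the Open_vSwitch root table (the two ovs-vsctl calls), loads the kernel module, and starts ovs-vswitchd. The seeding step can be reproduced by hand with the same ovs-vsctl arguments the log records, assuming ovsdb-server is already up (the system-id is the UUID generated for this host):

    # Seed the Open_vSwitch table the way ovs-ctl does on first start.
    import subprocess

    subprocess.run(["ovs-vsctl", "--no-wait", "--", "init", "--",
                    "set", "Open_vSwitch", ".", "db-version=8.5.1"], check=True)
    subprocess.run(["ovs-vsctl", "--no-wait", "set", "Open_vSwitch", ".",
                    "ovs-version=3.3.5-115.el9s",
                    "external-ids:system-id=41643524-e4b6-4069-ba08-6e5872c74bd3",
                    "external-ids:rundir=/var/run/openvswitch",
                    "system-type=centos", "system-version=9"], check=True)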
Dec 05 09:38:26 compute-0 python3.9[47421]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:38:27 compute-0 sudo[47571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxgmsxpdzbiutkpojsuqtalytlszxfkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927507.04986-266-187893569693758/AnsiballZ_sefcontext.py'
Dec 05 09:38:27 compute-0 sudo[47571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:27 compute-0 python3.9[47573]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 05 09:38:29 compute-0 kernel: SELinux:  Converting 2744 SID table entries...
Dec 05 09:38:29 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:38:29 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 05 09:38:29 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:38:29 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:38:29 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:38:29 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:38:29 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:38:29 compute-0 sudo[47571]: pam_unix(sudo:session): session closed for user root
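The sefcontext task above persists a local file-context rule so anything under /var/lib/edpm-config gets labeled container_file_t, then commits it, which is what produces the second SID-table conversion. A rough CLI equivalent, assuming policycoreutils-python-utils is installed; note that, like the module, semanage only records the rule, so already-existing files still need an explicit restorecon:

    # Persist the fcontext rule from the task, then relabel the path if present.
    import subprocess

    subprocess.run([
        "semanage", "fcontext", "-a",
        "-t", "container_file_t",       # setype=container_file_t
        "-r", "s0",                     # selevel=s0
        r"/var/lib/edpm-config(/.*)?",  # target regex from the task
    ], check=True)
    subprocess.run(["restorecon", "-R", "/var/lib/edpm-config"], check=False)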
Dec 05 09:38:30 compute-0 python3.9[47728]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:38:31 compute-0 sudo[47884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdmtbpwrtetgmsrrdgsfnfibvshawvnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927510.8305106-320-271671213325388/AnsiballZ_dnf.py'
Dec 05 09:38:31 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 05 09:38:31 compute-0 sudo[47884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:31 compute-0 python3.9[47886]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:38:32 compute-0 sudo[47884]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:33 compute-0 sudo[48037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nudjbrvrtisgrsngimrpbkueppjzxyje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927513.4120896-344-211156601818969/AnsiballZ_command.py'
Dec 05 09:38:33 compute-0 sudo[48037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:34 compute-0 python3.9[48039]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:38:34 compute-0 sudo[48037]: pam_unix(sudo:session): session closed for user root
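The rpm -V run above verifies every file of the just-installed packages against the rpm database; it prints nothing and exits 0 when all attributes match, so the useful signal is a nonzero exit plus the per-file discrepancy flags. A short sketch over a subset of the package list:

    # rpm -V exits nonzero when any file differs from the database and prints
    # one flag line per finding (S=size, 5=digest, T=mtime, ...).
    import subprocess

    pkgs = ["driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager"]
    result = subprocess.run(["rpm", "-V", *pkgs], capture_output=True, text=True)
    if result.returncode != 0:
        print("verification differences:\n" + result.stdout)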
Dec 05 09:38:35 compute-0 sudo[48324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sseddbyyumocbrdcaqsormksleqjpewk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927515.101474-368-120846254647518/AnsiballZ_file.py'
Dec 05 09:38:35 compute-0 sudo[48324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:35 compute-0 python3.9[48326]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 09:38:35 compute-0 sudo[48324]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:36 compute-0 python3.9[48476]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:38:37 compute-0 sudo[48628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgsjtofivxljvpoxcnbsjcegwcpushlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927516.8164659-416-42807025688810/AnsiballZ_dnf.py'
Dec 05 09:38:37 compute-0 sudo[48628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:37 compute-0 python3.9[48630]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:38:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 09:38:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 09:38:39 compute-0 systemd[1]: Reloading.
Dec 05 09:38:39 compute-0 systemd-rc-local-generator[48670]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:38:39 compute-0 systemd-sysv-generator[48673]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:38:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 09:38:39 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 09:38:39 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 09:38:39 compute-0 systemd[1]: run-r65a7f322ea2546598390916a5d8110f1.service: Deactivated successfully.
Dec 05 09:38:39 compute-0 sudo[48628]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:40 compute-0 sudo[48945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxnkdzgsipbmdskurujpjfhuurizuven ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927520.5589209-440-244312131067016/AnsiballZ_systemd.py'
Dec 05 09:38:40 compute-0 sudo[48945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:41 compute-0 python3.9[48947]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:38:41 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 05 09:38:41 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 05 09:38:41 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 05 09:38:41 compute-0 systemd[1]: Stopping Network Manager...
Dec 05 09:38:41 compute-0 NetworkManager[7200]: <info>  [1764927521.2121] caught SIGTERM, shutting down normally.
Dec 05 09:38:41 compute-0 NetworkManager[7200]: <info>  [1764927521.2151] dhcp4 (eth0): canceled DHCP transaction
Dec 05 09:38:41 compute-0 NetworkManager[7200]: <info>  [1764927521.2152] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 09:38:41 compute-0 NetworkManager[7200]: <info>  [1764927521.2153] dhcp4 (eth0): state changed no lease
Dec 05 09:38:41 compute-0 NetworkManager[7200]: <info>  [1764927521.2160] manager: NetworkManager state is now CONNECTED_SITE
Dec 05 09:38:41 compute-0 NetworkManager[7200]: <info>  [1764927521.2266] exiting (success)
Dec 05 09:38:41 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 09:38:41 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 05 09:38:41 compute-0 systemd[1]: Stopped Network Manager.
Dec 05 09:38:41 compute-0 systemd[1]: NetworkManager.service: Consumed 14.306s CPU time, 4.0M memory peak, read 0B from disk, written 38.5K to disk.
Dec 05 09:38:41 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 09:38:41 compute-0 systemd[1]: Starting Network Manager...
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.2934] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:77fa800c-2983-4f5e-b315-57495a3fe27a)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.2935] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3003] manager[0x55bb39a19090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 05 09:38:41 compute-0 systemd[1]: Starting Hostname Service...
Dec 05 09:38:41 compute-0 systemd[1]: Started Hostname Service.
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3920] hostname: hostname: using hostnamed
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3920] hostname: static hostname changed from (none) to "compute-0"
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3926] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3931] manager[0x55bb39a19090]: rfkill: Wi-Fi hardware radio set enabled
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3931] manager[0x55bb39a19090]: rfkill: WWAN hardware radio set enabled
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3954] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3965] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3965] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3966] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3966] manager: Networking is enabled by state file
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3968] settings: Loaded settings plugin: keyfile (internal)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.3972] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4001] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4012] dhcp: init: Using DHCP client 'internal'
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4015] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4019] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4025] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4033] device (lo): Activation: starting connection 'lo' (f6d82822-cadb-414c-ae68-8f6717460373)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4040] device (eth0): carrier: link connected
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4045] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4050] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4050] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4057] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4063] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4067] device (eth1): carrier: link connected
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4070] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4073] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (d1d8c6ca-28b5-552c-8427-579a453c92d6) (indicated)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4073] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4076] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4081] device (eth1): Activation: starting connection 'ci-private-network' (d1d8c6ca-28b5-552c-8427-579a453c92d6)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4086] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 05 09:38:41 compute-0 systemd[1]: Started Network Manager.
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4091] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4093] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4094] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4095] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4097] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4099] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4101] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4104] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4108] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4110] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4122] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4138] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4467] dhcp4 (eth0): state changed new lease, address=38.129.56.228
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4473] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 05 09:38:41 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4650] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4655] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4659] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4665] device (lo): Activation: successful, device activated.
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4670] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4672] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4674] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4677] device (eth1): Activation: successful, device activated.
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4685] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4686] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4689] manager: NetworkManager state is now CONNECTED_SITE
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4692] device (eth0): Activation: successful, device activated.
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4696] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 05 09:38:41 compute-0 NetworkManager[48957]: <info>  [1764927521.4699] manager: startup complete
Dec 05 09:38:41 compute-0 sudo[48945]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:41 compute-0 systemd[1]: Finished Network Manager Wait Online.
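The restart above shows the ordering systemd applies: wait-online stops first, the old NetworkManager (pid 7200) exits on SIGTERM without tearing down interfaces, and the new instance (pid 48957) assumes the existing eth0/eth1 connections, so the SSH session driving the play survives. A sketch of the same restart-and-wait step, using the nm-online helper that NetworkManager-wait-online.service itself wraps:

    # Restart NetworkManager, then block until it reports startup complete.
    import subprocess

    subprocess.run(["systemctl", "restart", "NetworkManager.service"], check=True)
    # -s: wait for NM startup rather than full connectivity; cap the wait at 60 s.
    subprocess.run(["nm-online", "-s", "-q", "--timeout", "60"], check=True)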
Dec 05 09:38:41 compute-0 sudo[49171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muspvoilffhtgtxxdbstfhqezgcfxjwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927521.6612096-464-134373205455696/AnsiballZ_dnf.py'
Dec 05 09:38:41 compute-0 sudo[49171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:42 compute-0 python3.9[49173]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:38:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 09:38:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 09:38:47 compute-0 systemd[1]: Reloading.
Dec 05 09:38:47 compute-0 systemd-sysv-generator[49224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:38:47 compute-0 systemd-rc-local-generator[49221]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:38:48 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 09:38:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 09:38:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 09:38:49 compute-0 systemd[1]: run-r8bcdcc604f114888ad9a75b1213da59a.service: Deactivated successfully.
Dec 05 09:38:49 compute-0 sudo[49171]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:51 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 09:38:52 compute-0 sudo[49631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gomwbhlxcfzjjlsojxmygtlwcffdrtsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927532.0096755-500-45394746321649/AnsiballZ_stat.py'
Dec 05 09:38:52 compute-0 sudo[49631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:52 compute-0 python3.9[49633]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:38:52 compute-0 sudo[49631]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:53 compute-0 sudo[49783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iphizdgvudnkzheeeftrlntwwmcmrczi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927532.773379-527-185186948591493/AnsiballZ_ini_file.py'
Dec 05 09:38:53 compute-0 sudo[49783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:53 compute-0 python3.9[49785]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:53 compute-0 sudo[49783]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:54 compute-0 sudo[49937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdxrdweiovzunogddqaexsbboqfvcksx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927533.7856188-557-176854704597049/AnsiballZ_ini_file.py'
Dec 05 09:38:54 compute-0 sudo[49937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:54 compute-0 python3.9[49939]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:54 compute-0 sudo[49937]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:54 compute-0 sudo[50089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prgmikdumdblvpemqfaygdwkkklmrjqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927534.4471507-557-53493042891550/AnsiballZ_ini_file.py'
Dec 05 09:38:54 compute-0 sudo[50089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:54 compute-0 python3.9[50091]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:54 compute-0 sudo[50089]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:55 compute-0 sudo[50241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfppqbftbbtmfcaictzdkbdsutxhbmca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927535.1020007-602-40366054506890/AnsiballZ_ini_file.py'
Dec 05 09:38:55 compute-0 sudo[50241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:55 compute-0 python3.9[50243]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:55 compute-0 sudo[50241]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:55 compute-0 sudo[50393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udwbgwrhpmpmrxprxtwaamksuvjcsnyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927535.7417858-602-172506697882848/AnsiballZ_ini_file.py'
Dec 05 09:38:55 compute-0 sudo[50393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:56 compute-0 python3.9[50395]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:56 compute-0 sudo[50393]: pam_unix(sudo:session): session closed for user root
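The five ini_file tasks above leave [main] with no-auto-default=* (so NetworkManager stops generating default connections for new NICs) and strip any dns= and rc-manager= overrides from NetworkManager.conf and the cloud-init drop-in. A configparser sketch of the same edits; unlike the Ansible module it rewrites the whole file and drops comments, so treat it as illustrative only:

    # Apply the same [main] edits as the ini_file tasks above.
    import configparser

    path = "/etc/NetworkManager/NetworkManager.conf"
    cfg = configparser.ConfigParser(delimiters=("=",))
    cfg.read(path)
    if not cfg.has_section("main"):
        cfg.add_section("main")
    cfg.set("main", "no-auto-default", "*")  # option=no-auto-default value=*
    cfg.remove_option("main", "dns")         # state=absent
    cfg.remove_option("main", "rc-manager")  # state=absent
    with open(path, "w") as f:
        cfg.write(f, space_around_delimiters=False)  # like no_extra_spaces=True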
Dec 05 09:38:56 compute-0 sudo[50545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgyqatkoqstfwwhraspuankuroamalud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927536.512243-647-182932672868294/AnsiballZ_stat.py'
Dec 05 09:38:56 compute-0 sudo[50545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:56 compute-0 python3.9[50547]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:38:56 compute-0 sudo[50545]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:57 compute-0 sudo[50668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcugdixxqlniscjtlboyhfxsruazudua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927536.512243-647-182932672868294/AnsiballZ_copy.py'
Dec 05 09:38:57 compute-0 sudo[50668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:57 compute-0 python3.9[50670]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927536.512243-647-182932672868294/.source _original_basename=.km20u20s follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:57 compute-0 sudo[50668]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:58 compute-0 sudo[50820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhspisdgrtajarpatqepdswgogzxsjjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927538.0611541-692-213511430587010/AnsiballZ_file.py'
Dec 05 09:38:58 compute-0 sudo[50820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:58 compute-0 python3.9[50822]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:58 compute-0 sudo[50820]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:59 compute-0 sudo[50972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymlkdokdsqmuestzlrkowrubtpdgrzko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927538.8378172-716-264993606350619/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 05 09:38:59 compute-0 sudo[50972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:59 compute-0 python3.9[50974]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 05 09:38:59 compute-0 sudo[50972]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:00 compute-0 sudo[51124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aspftzjambkqqfibtzbwullpbdoplyxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927539.820982-743-156147782011815/AnsiballZ_file.py'
Dec 05 09:39:00 compute-0 sudo[51124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:00 compute-0 python3.9[51126]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:39:00 compute-0 sudo[51124]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:01 compute-0 sudo[51276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytcuasvfkwsbzneddnquskqkncwcdylf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927540.900189-773-101515347461101/AnsiballZ_stat.py'
Dec 05 09:39:01 compute-0 sudo[51276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:01 compute-0 sudo[51276]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:01 compute-0 sudo[51399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwnmfcfsmrreeratgafjjwlyvzljzggm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927540.900189-773-101515347461101/AnsiballZ_copy.py'
Dec 05 09:39:01 compute-0 sudo[51399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:01 compute-0 sudo[51399]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:02 compute-0 sudo[51551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whpheuqyfuskddmlioyciiupnanithim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927542.1729975-818-103175875532284/AnsiballZ_slurp.py'
Dec 05 09:39:02 compute-0 sudo[51551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:02 compute-0 python3.9[51553]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 05 09:39:02 compute-0 sudo[51551]: pam_unix(sudo:session): session closed for user root
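The slurp call above is the whole story of that module: read the remote file and hand it back base64-encoded so the controller can decode it into a variable. The same read done locally:

    # What ansible.builtin.slurp returns for /etc/os-net-config/config.yaml.
    import base64

    with open("/etc/os-net-config/config.yaml", "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    # Ansible exposes this as {"content": payload, "encoding": "base64", ...}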
Dec 05 09:39:03 compute-0 sudo[51726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nenuqtwvnnrzskauzoinkpzwknqfyyln ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927543.1627066-845-138417619118257/async_wrapper.py j221150157845 300 /home/zuul/.ansible/tmp/ansible-tmp-1764927543.1627066-845-138417619118257/AnsiballZ_edpm_os_net_config.py _'
Dec 05 09:39:03 compute-0 sudo[51726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:03 compute-0 ansible-async_wrapper.py[51728]: Invoked with j221150157845 300 /home/zuul/.ansible/tmp/ansible-tmp-1764927543.1627066-845-138417619118257/AnsiballZ_edpm_os_net_config.py _
Dec 05 09:39:03 compute-0 ansible-async_wrapper.py[51731]: Starting module and watcher
Dec 05 09:39:03 compute-0 ansible-async_wrapper.py[51731]: Start watching 51732 (300)
Dec 05 09:39:03 compute-0 ansible-async_wrapper.py[51732]: Start module (51732)
Dec 05 09:39:03 compute-0 ansible-async_wrapper.py[51728]: Return async_wrapper task started.
Dec 05 09:39:04 compute-0 sudo[51726]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:04 compute-0 python3.9[51733]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
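The async_wrapper lines above show why edpm_os_net_config runs asynchronously: the wrapper forks the module, starts a watcher with a 300-second budget, and returns "task started" immediately, so the play is not killed when the network reconfiguration below drops connectivity. The real wrapper daemonizes so it outlives the SSH session; a same-process simplification of the watcher idea:

    # Launch a long-running job, return at once, kill it if it exceeds the budget.
    import subprocess, threading

    def run_with_watchdog(cmd, timeout=300):
        proc = subprocess.Popen(cmd)

        def watcher():
            try:
                proc.wait(timeout=timeout)   # normal completion
            except subprocess.TimeoutExpired:
                proc.kill()                  # budget exhausted
        threading.Thread(target=watcher, daemon=True).start()
        return proc.pid                      # poll completion by pid / job id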
Dec 05 09:39:04 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 05 09:39:04 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 05 09:39:04 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 05 09:39:04 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 05 09:39:04 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1258] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1274] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51734 uid=0 result="success"
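Before touching any device, the tool creates a NetworkManager checkpoint and then stretches its rollback timeout (the two audit lines above); if the process dies mid-change, NM restores every device to the snapshot. Checkpoints are only exposed over D-Bus; a busctl sketch, assuming root and taking an empty device array to mean "all devices" (the method signature is CheckpointCreate(ao, u, u) -> o):

    # Create a checkpoint over all devices with a 60 s automatic rollback.
    import subprocess

    out = subprocess.run([
        "busctl", "call", "org.freedesktop.NetworkManager",
        "/org/freedesktop/NetworkManager", "org.freedesktop.NetworkManager",
        "CheckpointCreate", "aouu", "0", "60", "0",  # no devices, 60 s, no flags
    ], check=True, capture_output=True, text=True).stdout
    # prints e.g.: o "/org/freedesktop/NetworkManager/Checkpoint/1"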
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1813] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1814] audit: op="connection-add" uuid="782f9971-bb6f-4d48-a822-c449f15173e1" name="br-ex-br" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1830] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1831] audit: op="connection-add" uuid="c513334a-ca55-498b-aa15-4279f327b964" name="br-ex-port" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1844] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1845] audit: op="connection-add" uuid="45d3e98f-7a42-4266-b500-f26b85afdd34" name="eth1-port" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1859] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1861] audit: op="connection-add" uuid="cb4223e2-88dd-43d8-aada-99b380ec1367" name="vlan20-port" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1875] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1877] audit: op="connection-add" uuid="2fca8faf-bfed-42a1-9260-2513c74cf37a" name="vlan21-port" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1889] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1891] audit: op="connection-add" uuid="62f37c9d-2439-4202-bd3b-7df07c128c6f" name="vlan22-port" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1904] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1905] audit: op="connection-add" uuid="6776d6e2-b81c-4501-987c-d069bd9fbd21" name="vlan23-port" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1929] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.timestamp,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1945] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1947] audit: op="connection-add" uuid="b4356b8a-f71a-406c-81d6-31da3815ed07" name="br-ex-if" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.1995] audit: op="connection-update" uuid="d1d8c6ca-28b5-552c-8427-579a453c92d6" name="ci-private-network" args="ovs-interface.type,ipv4.routing-rules,ipv4.dns,ipv4.routes,ipv4.method,ipv4.addresses,ipv4.never-default,ovs-external-ids.data,connection.slave-type,connection.timestamp,connection.controller,connection.port-type,connection.master,ipv6.routing-rules,ipv6.dns,ipv6.routes,ipv6.method,ipv6.addr-gen-mode,ipv6.addresses" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2010] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2011] audit: op="connection-add" uuid="3e3d1f99-344c-4fe3-bec1-651313f27b19" name="vlan20-if" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2025] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2026] audit: op="connection-add" uuid="26f5e226-ab73-4cde-ae60-301f18e3db6d" name="vlan21-if" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2041] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2042] audit: op="connection-add" uuid="a2a008f8-8045-4a60-9c2d-7f8a4028089d" name="vlan22-if" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2058] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2059] audit: op="connection-add" uuid="7dc7da79-7bb1-41e6-a37c-fa679d7df200" name="vlan23-if" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2069] audit: op="connection-delete" uuid="e7fb5895-bcdf-3b3c-8ddf-f78dbcafe155" name="Wired connection 1" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2085] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2093] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2096] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (782f9971-bb6f-4d48-a822-c449f15173e1)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2096] audit: op="connection-activate" uuid="782f9971-bb6f-4d48-a822-c449f15173e1" name="br-ex-br" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2097] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2102] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2105] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (c513334a-ca55-498b-aa15-4279f327b964)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2106] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2110] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2112] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (45d3e98f-7a42-4266-b500-f26b85afdd34)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2113] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2118] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2120] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (cb4223e2-88dd-43d8-aada-99b380ec1367)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2121] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2126] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2129] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (2fca8faf-bfed-42a1-9260-2513c74cf37a)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2130] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2134] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2137] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (62f37c9d-2439-4202-bd3b-7df07c128c6f)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2139] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2144] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2147] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (6776d6e2-b81c-4501-987c-d069bd9fbd21)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2147] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2149] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2150] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2155] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2159] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2162] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (b4356b8a-f71a-406c-81d6-31da3815ed07)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2163] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2165] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2167] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2168] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2168] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2177] device (eth1): disconnecting for new activation request.
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2177] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2179] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2180] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2182] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2184] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2190] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2195] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (3e3d1f99-344c-4fe3-bec1-651313f27b19)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2196] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2199] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2201] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2202] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2205] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2209] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2213] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (26f5e226-ab73-4cde-ae60-301f18e3db6d)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2214] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2216] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2217] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2218] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2294] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2298] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2300] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (a2a008f8-8045-4a60-9c2d-7f8a4028089d)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2301] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2304] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2305] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2306] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2308] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2312] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2315] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (7dc7da79-7bb1-41e6-a37c-fa679d7df200)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2316] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2317] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2318] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2319] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2320] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2331] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2333] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2335] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2336] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2342] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2344] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2347] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2349] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2350] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2354] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2357] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2359] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2360] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2364] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2367] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2369] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2370] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2374] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2377] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2379] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2380] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2384] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2386] dhcp4 (eth0): canceled DHCP transaction
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2387] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2387] dhcp4 (eth0): state changed no lease
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2388] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 05 09:39:06 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2404] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2408] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51734 uid=0 result="fail" reason="Device is not activated"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2444] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 05 09:39:06 compute-0 systemd-udevd[51738]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 09:39:06 compute-0 kernel: Timeout policy base is empty
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2447] dhcp4 (eth0): state changed new lease, address=38.129.56.228
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2455] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2459] device (eth1): disconnecting for new activation request.
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2459] audit: op="connection-activate" uuid="d1d8c6ca-28b5-552c-8427-579a453c92d6" name="ci-private-network" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2494] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2499] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2502] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51734 uid=0 result="success"
Dec 05 09:39:06 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 09:39:06 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2671] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2833] device (eth1): Activation: starting connection 'ci-private-network' (d1d8c6ca-28b5-552c-8427-579a453c92d6)
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2839] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2850] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2854] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2860] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2865] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2874] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2876] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2878] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2879] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2881] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2882] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2886] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2892] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2897] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2911] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2914] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2919] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2923] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2927] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2931] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2936] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2940] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2945] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2949] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2954] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.2959] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 kernel: br-ex: entered promiscuous mode
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3027] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3032] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3039] device (eth1): Activation: successful, device activated.
Dec 05 09:39:06 compute-0 kernel: vlan22: entered promiscuous mode
Dec 05 09:39:06 compute-0 systemd-udevd[51740]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 09:39:06 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3127] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3139] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3175] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3181] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3189] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 05 09:39:06 compute-0 kernel: vlan21: entered promiscuous mode
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3289] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3310] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3353] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3366] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 kernel: vlan20: entered promiscuous mode
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3390] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3399] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3407] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3427] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3429] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3436] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 05 09:39:06 compute-0 kernel: vlan23: entered promiscuous mode
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3482] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3496] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3530] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3531] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3537] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3591] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3605] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3635] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3637] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 09:39:06 compute-0 NetworkManager[48957]: <info>  [1764927546.3644] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 05 09:39:07 compute-0 NetworkManager[48957]: <info>  [1764927547.4807] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51734 uid=0 result="success"
Dec 05 09:39:07 compute-0 sudo[52089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miltwywdtpuqiitjbqcqirltirfigori ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927547.1725447-845-27407806962704/AnsiballZ_async_status.py'
Dec 05 09:39:07 compute-0 sudo[52089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:07 compute-0 NetworkManager[48957]: <info>  [1764927547.6861] checkpoint[0x55bb399ef950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 05 09:39:07 compute-0 NetworkManager[48957]: <info>  [1764927547.6866] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51734 uid=0 result="success"
Dec 05 09:39:07 compute-0 python3.9[52091]: ansible-ansible.legacy.async_status Invoked with jid=j221150157845.51728 mode=status _async_dir=/root/.ansible_async
Dec 05 09:39:07 compute-0 sudo[52089]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:07 compute-0 NetworkManager[48957]: <info>  [1764927547.9703] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51734 uid=0 result="success"
Dec 05 09:39:07 compute-0 NetworkManager[48957]: <info>  [1764927547.9713] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51734 uid=0 result="success"
Dec 05 09:39:08 compute-0 NetworkManager[48957]: <info>  [1764927548.1912] audit: op="networking-control" arg="global-dns-configuration" pid=51734 uid=0 result="success"
Dec 05 09:39:08 compute-0 NetworkManager[48957]: <info>  [1764927548.1947] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 05 09:39:08 compute-0 NetworkManager[48957]: <info>  [1764927548.1984] audit: op="networking-control" arg="global-dns-configuration" pid=51734 uid=0 result="success"
Dec 05 09:39:08 compute-0 NetworkManager[48957]: <info>  [1764927548.2011] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51734 uid=0 result="success"
Dec 05 09:39:08 compute-0 NetworkManager[48957]: <info>  [1764927548.3527] checkpoint[0x55bb399efa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 05 09:39:08 compute-0 NetworkManager[48957]: <info>  [1764927548.3533] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51734 uid=0 result="success"
Dec 05 09:39:08 compute-0 ansible-async_wrapper.py[51732]: Module complete (51732)
Dec 05 09:39:09 compute-0 ansible-async_wrapper.py[51731]: Done in kid B.
Dec 05 09:39:11 compute-0 sudo[52195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvbwdyjgxrsdosektvhesajkhdaxlpxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927547.1725447-845-27407806962704/AnsiballZ_async_status.py'
Dec 05 09:39:11 compute-0 sudo[52195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:11 compute-0 python3.9[52197]: ansible-ansible.legacy.async_status Invoked with jid=j221150157845.51728 mode=status _async_dir=/root/.ansible_async
Dec 05 09:39:11 compute-0 sudo[52195]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:11 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 09:39:11 compute-0 sudo[52297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmozmvopsjwrhwsjuckzyhllguayesja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927547.1725447-845-27407806962704/AnsiballZ_async_status.py'
Dec 05 09:39:11 compute-0 sudo[52297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:11 compute-0 python3.9[52299]: ansible-ansible.legacy.async_status Invoked with jid=j221150157845.51728 mode=cleanup _async_dir=/root/.ansible_async
Dec 05 09:39:11 compute-0 sudo[52297]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:12 compute-0 sudo[52449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuyqrohfpxrwtrlflrlcsrcxrsbkbsrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927552.183126-926-209980581186271/AnsiballZ_stat.py'
Dec 05 09:39:12 compute-0 sudo[52449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:12 compute-0 python3.9[52451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:39:12 compute-0 sudo[52449]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:12 compute-0 sudo[52572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szpndnglqpdcabtpfbcrtgokvypakmhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927552.183126-926-209980581186271/AnsiballZ_copy.py'
Dec 05 09:39:12 compute-0 sudo[52572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:13 compute-0 python3.9[52574]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927552.183126-926-209980581186271/.source.returncode _original_basename=.qv8j9kqc follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:39:13 compute-0 sudo[52572]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:13 compute-0 sudo[52724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsyiowqrolbwjpwatqwgmyqbrsdxfcvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927553.5498002-974-193338938560599/AnsiballZ_stat.py'
Dec 05 09:39:13 compute-0 sudo[52724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:13 compute-0 python3.9[52726]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:39:14 compute-0 sudo[52724]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:14 compute-0 sudo[52847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaiygbwgarxccmuevmuttcudrkimmypz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927553.5498002-974-193338938560599/AnsiballZ_copy.py'
Dec 05 09:39:14 compute-0 sudo[52847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:14 compute-0 python3.9[52849]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927553.5498002-974-193338938560599/.source.cfg _original_basename=.9hr7z9lx follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:39:14 compute-0 sudo[52847]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:15 compute-0 sudo[53000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akboaizgnyhkfwhphscpiaciiymcwqfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927554.833305-1019-266011239739809/AnsiballZ_systemd.py'
Dec 05 09:39:15 compute-0 sudo[53000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:15 compute-0 python3.9[53002]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:39:15 compute-0 systemd[1]: Reloading Network Manager...
Dec 05 09:39:15 compute-0 NetworkManager[48957]: <info>  [1764927555.4673] audit: op="reload" arg="0" pid=53006 uid=0 result="success"
Dec 05 09:39:15 compute-0 NetworkManager[48957]: <info>  [1764927555.4685] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 05 09:39:15 compute-0 systemd[1]: Reloaded Network Manager.
Dec 05 09:39:15 compute-0 sudo[53000]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:15 compute-0 sshd-session[44963]: Connection closed by 192.168.122.30 port 54148
Dec 05 09:39:15 compute-0 sshd-session[44960]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:39:15 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 05 09:39:15 compute-0 systemd[1]: session-10.scope: Consumed 54.484s CPU time.
Dec 05 09:39:15 compute-0 systemd-logind[789]: Session 10 logged out. Waiting for processes to exit.
Dec 05 09:39:15 compute-0 systemd-logind[789]: Removed session 10.
Dec 05 09:39:20 compute-0 sshd-session[53036]: Accepted publickey for zuul from 192.168.122.30 port 35642 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:39:21 compute-0 systemd-logind[789]: New session 11 of user zuul.
Dec 05 09:39:21 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 05 09:39:21 compute-0 sshd-session[53036]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:39:22 compute-0 python3.9[53190]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:39:23 compute-0 python3.9[53344]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:39:24 compute-0 python3.9[53538]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:39:25 compute-0 sshd-session[53040]: Connection closed by 192.168.122.30 port 35642
Dec 05 09:39:25 compute-0 sshd-session[53036]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:39:25 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 05 09:39:25 compute-0 systemd[1]: session-11.scope: Consumed 2.384s CPU time.
Dec 05 09:39:25 compute-0 systemd-logind[789]: Session 11 logged out. Waiting for processes to exit.
Dec 05 09:39:25 compute-0 systemd-logind[789]: Removed session 11.
Dec 05 09:39:25 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 09:39:30 compute-0 sshd-session[53566]: Accepted publickey for zuul from 192.168.122.30 port 58686 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:39:30 compute-0 systemd-logind[789]: New session 12 of user zuul.
Dec 05 09:39:30 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 05 09:39:30 compute-0 sshd-session[53566]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:39:31 compute-0 python3.9[53720]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:39:32 compute-0 python3.9[53874]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:39:33 compute-0 sudo[54028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akwoxwebkhcsxoczxhufgyislafphyxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927572.8179762-80-205225644072750/AnsiballZ_setup.py'
Dec 05 09:39:33 compute-0 sudo[54028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:33 compute-0 python3.9[54030]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:39:33 compute-0 sudo[54028]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:34 compute-0 sudo[54113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxlrgxpbduqyaohbkyyfnxkskuxnirnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927572.8179762-80-205225644072750/AnsiballZ_dnf.py'
Dec 05 09:39:34 compute-0 sudo[54113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:34 compute-0 python3.9[54115]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:39:35 compute-0 sudo[54113]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:37 compute-0 sudo[54267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzfbtiupixeaprazxvmqqbujxzvkodos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927576.7638779-116-178825902581418/AnsiballZ_setup.py'
Dec 05 09:39:37 compute-0 sudo[54267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:37 compute-0 python3.9[54269]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:39:37 compute-0 sudo[54267]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:38 compute-0 sudo[54462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clpvbcyviocwzlvwdbxgzvcpnjzcztut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927578.1142085-149-253517193798520/AnsiballZ_file.py'
Dec 05 09:39:38 compute-0 sudo[54462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:38 compute-0 python3.9[54464]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:39:38 compute-0 sudo[54462]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:39 compute-0 sudo[54614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqkzffzxcazkrsvguvthxgmtueqxssmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927578.924049-173-122211460369216/AnsiballZ_command.py'
Dec 05 09:39:39 compute-0 sudo[54614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:39 compute-0 python3.9[54616]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:39:39 compute-0 podman[54617]: 2025-12-05 09:39:39.659374069 +0000 UTC m=+0.066157391 system refresh
Dec 05 09:39:39 compute-0 sudo[54614]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:40 compute-0 sudo[54776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biecvmiegmuffpagkdygyuqwqufjaeoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927579.9513667-197-25880322074214/AnsiballZ_stat.py'
Dec 05 09:39:40 compute-0 sudo[54776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:40 compute-0 python3.9[54778]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:39:40 compute-0 sudo[54776]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:39:41 compute-0 sudo[54899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwlfukpukctfwbxlecsyskkrasjoxkzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927579.9513667-197-25880322074214/AnsiballZ_copy.py'
Dec 05 09:39:41 compute-0 sudo[54899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:41 compute-0 python3.9[54901]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927579.9513667-197-25880322074214/.source.json follow=False _original_basename=podman_network_config.j2 checksum=8274c763978835cfcb695734f7eacac294331949 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:39:41 compute-0 sudo[54899]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:41 compute-0 sudo[55051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcpfuwddezapsuxyqxvmqukzwqmufjcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927581.5137506-242-55954094286037/AnsiballZ_stat.py'
Dec 05 09:39:41 compute-0 sudo[55051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:41 compute-0 python3.9[55053]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:39:41 compute-0 sudo[55051]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:42 compute-0 sudo[55174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jczkuoncrmpopqwbepodojnqkkqgelrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927581.5137506-242-55954094286037/AnsiballZ_copy.py'
Dec 05 09:39:42 compute-0 sudo[55174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:42 compute-0 python3.9[55176]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927581.5137506-242-55954094286037/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:39:42 compute-0 sudo[55174]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:43 compute-0 sudo[55326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdvgdzbwgwfeookbwmmadcltxwtwjpac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927582.856069-290-193173962773459/AnsiballZ_ini_file.py'
Dec 05 09:39:43 compute-0 sudo[55326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:43 compute-0 python3.9[55328]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:39:43 compute-0 sudo[55326]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:43 compute-0 sudo[55478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcnkintqbmjgpegjetofxoansrafkkqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927583.687765-290-64344964260563/AnsiballZ_ini_file.py'
Dec 05 09:39:43 compute-0 sudo[55478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:44 compute-0 python3.9[55480]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:39:44 compute-0 sudo[55478]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:44 compute-0 sudo[55630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adzmyhfbfojkpdsxyvmyswnboafxrsiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927584.2757766-290-207447359651805/AnsiballZ_ini_file.py'
Dec 05 09:39:44 compute-0 sudo[55630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:44 compute-0 python3.9[55632]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:39:44 compute-0 sudo[55630]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:45 compute-0 sudo[55782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtdrqmeubysqtlrwvtmytacwgssjviqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927584.844154-290-144209663370266/AnsiballZ_ini_file.py'
Dec 05 09:39:45 compute-0 sudo[55782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:45 compute-0 python3.9[55784]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:39:45 compute-0 sudo[55782]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:45 compute-0 irqbalance[784]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 05 09:39:45 compute-0 irqbalance[784]: IRQ 26 affinity is now unmanaged
Dec 05 09:39:46 compute-0 sudo[55934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvfsaiorqexhlwcvycmbklmovkvimpfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927585.9076183-383-172072149951188/AnsiballZ_dnf.py'
Dec 05 09:39:46 compute-0 sudo[55934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:46 compute-0 python3.9[55936]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:39:48 compute-0 sudo[55934]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:49 compute-0 sudo[56087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izfstdvjwpylbuaaxumvbnhbeeqkvdkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927589.0770452-416-5222074614879/AnsiballZ_setup.py'
Dec 05 09:39:49 compute-0 sudo[56087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:49 compute-0 python3.9[56089]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:39:49 compute-0 sudo[56087]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:50 compute-0 sudo[56241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knpzcidduzrnlkrtjwbvchdurtuhrbna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927589.8820825-440-248211461964579/AnsiballZ_stat.py'
Dec 05 09:39:50 compute-0 sudo[56241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:50 compute-0 python3.9[56243]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:39:50 compute-0 sudo[56241]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:50 compute-0 sudo[56393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyvwmbuhpdxjzxrjekhjbaxeizmpllxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927590.6932917-467-26454803838950/AnsiballZ_stat.py'
Dec 05 09:39:50 compute-0 sudo[56393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:51 compute-0 python3.9[56395]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:39:51 compute-0 sudo[56393]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:51 compute-0 sudo[56545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzlcxancrweuomequmbhsqtfebbiylzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927591.544728-497-236222965026495/AnsiballZ_command.py'
Dec 05 09:39:51 compute-0 sudo[56545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:52 compute-0 python3.9[56547]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:39:52 compute-0 sudo[56545]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:52 compute-0 sudo[56698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adfxnaonnwsabminekzmhtdhwmedrylb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927592.4041831-527-273828685213382/AnsiballZ_service_facts.py'
Dec 05 09:39:52 compute-0 sudo[56698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:53 compute-0 python3.9[56700]: ansible-service_facts Invoked
Dec 05 09:39:53 compute-0 network[56717]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:39:53 compute-0 network[56718]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:39:53 compute-0 network[56719]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:39:55 compute-0 sudo[56698]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:57 compute-0 sudo[57002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uksojdehxiokuseuvfbfzypuzoctsaaf ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764927596.8576422-572-180868335203054/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764927596.8576422-572-180868335203054/args'
Dec 05 09:39:57 compute-0 sudo[57002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:57 compute-0 sudo[57002]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:57 compute-0 sudo[57169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzkvpuywbvdpaixassaxthekpdilzzxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927597.6048143-605-106031540841189/AnsiballZ_dnf.py'
Dec 05 09:39:57 compute-0 sudo[57169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:58 compute-0 python3.9[57171]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:40:00 compute-0 sudo[57169]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:01 compute-0 sudo[57322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfbznllbofjowmrjcryzngsksxvlylwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927600.5676568-644-177370294766518/AnsiballZ_package_facts.py'
Dec 05 09:40:01 compute-0 sudo[57322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:01 compute-0 python3.9[57324]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 05 09:40:01 compute-0 sudo[57322]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:02 compute-0 sudo[57474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btfkafxjgtiighyrmoroawocnrbicift ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927602.4210987-674-107017475018062/AnsiballZ_stat.py'
Dec 05 09:40:02 compute-0 sudo[57474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:02 compute-0 python3.9[57476]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:03 compute-0 sudo[57474]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:03 compute-0 sudo[57599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahzjcetijaeyccbjejoqrjpstzkhyhcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927602.4210987-674-107017475018062/AnsiballZ_copy.py'
Dec 05 09:40:03 compute-0 sudo[57599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:03 compute-0 python3.9[57601]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927602.4210987-674-107017475018062/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:03 compute-0 sudo[57599]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:04 compute-0 sudo[57753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sejpowceqsdtxsxiiavuihqziobkvwrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927603.792622-719-25334899356625/AnsiballZ_stat.py'
Dec 05 09:40:04 compute-0 sudo[57753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:04 compute-0 python3.9[57755]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:04 compute-0 sudo[57753]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:04 compute-0 sudo[57878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stouzipulwlmktjywrwzeomolyhrsatp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927603.792622-719-25334899356625/AnsiballZ_copy.py'
Dec 05 09:40:04 compute-0 sudo[57878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:04 compute-0 python3.9[57880]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927603.792622-719-25334899356625/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:04 compute-0 sudo[57878]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:06 compute-0 sudo[58032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qragigtbgevbfkefkeyaoofpedisjdxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927606.0136714-782-21074158591350/AnsiballZ_lineinfile.py'
Dec 05 09:40:06 compute-0 sudo[58032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:06 compute-0 python3.9[58034]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:06 compute-0 sudo[58032]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:08 compute-0 sudo[58186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxkrcfnmvljcuopcktejuurbcicnfnea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927607.758044-827-155007817883943/AnsiballZ_setup.py'
Dec 05 09:40:08 compute-0 sudo[58186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:08 compute-0 python3.9[58188]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:40:08 compute-0 sudo[58186]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:09 compute-0 sudo[58270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dquklaxehdmruuooqtuoprwxwtsaitec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927607.758044-827-155007817883943/AnsiballZ_systemd.py'
Dec 05 09:40:09 compute-0 sudo[58270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:09 compute-0 python3.9[58272]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:40:09 compute-0 sudo[58270]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:10 compute-0 sudo[58424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmnnhabauxbkkrgalqxxombvrkbitvwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927610.3779213-875-212107794535764/AnsiballZ_setup.py'
Dec 05 09:40:10 compute-0 sudo[58424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:10 compute-0 python3.9[58426]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:40:11 compute-0 sudo[58424]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:11 compute-0 sudo[58508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzkcsejqmeejmirteitnqoyohvnzxkdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927610.3779213-875-212107794535764/AnsiballZ_systemd.py'
Dec 05 09:40:11 compute-0 sudo[58508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:11 compute-0 python3.9[58510]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:40:11 compute-0 chronyd[793]: chronyd exiting
Dec 05 09:40:11 compute-0 systemd[1]: Stopping NTP client/server...
Dec 05 09:40:11 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 05 09:40:11 compute-0 systemd[1]: Stopped NTP client/server.
Dec 05 09:40:11 compute-0 systemd[1]: Starting NTP client/server...
Dec 05 09:40:11 compute-0 chronyd[58518]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 05 09:40:11 compute-0 chronyd[58518]: Frequency -27.032 +/- 0.137 ppm read from /var/lib/chrony/drift
Dec 05 09:40:11 compute-0 chronyd[58518]: Loaded seccomp filter (level 2)
Dec 05 09:40:11 compute-0 systemd[1]: Started NTP client/server.
Dec 05 09:40:11 compute-0 sudo[58508]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:12 compute-0 sshd-session[53569]: Connection closed by 192.168.122.30 port 58686
Dec 05 09:40:12 compute-0 sshd-session[53566]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:40:12 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 05 09:40:12 compute-0 systemd[1]: session-12.scope: Consumed 26.627s CPU time.
Dec 05 09:40:12 compute-0 systemd-logind[789]: Session 12 logged out. Waiting for processes to exit.
Dec 05 09:40:12 compute-0 systemd-logind[789]: Removed session 12.
Dec 05 09:40:17 compute-0 sshd-session[58544]: Accepted publickey for zuul from 192.168.122.30 port 55302 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:40:17 compute-0 systemd-logind[789]: New session 13 of user zuul.
Dec 05 09:40:17 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 05 09:40:17 compute-0 sshd-session[58544]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:40:18 compute-0 sudo[58697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euirqpcrpmknsqtcthevddkjjetsmtbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927617.9320977-26-111540115509348/AnsiballZ_file.py'
Dec 05 09:40:18 compute-0 sudo[58697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:18 compute-0 python3.9[58699]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:18 compute-0 sudo[58697]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:19 compute-0 sudo[58849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkqnewwhzbxqqzbdpplhczxbuttyfvws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927618.8459914-62-15192791956251/AnsiballZ_stat.py'
Dec 05 09:40:19 compute-0 sudo[58849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:19 compute-0 python3.9[58851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:19 compute-0 sudo[58849]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:20 compute-0 sudo[58972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdywvfkkrgnnaytmcdrnqnpmxpgortsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927618.8459914-62-15192791956251/AnsiballZ_copy.py'
Dec 05 09:40:20 compute-0 sudo[58972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:20 compute-0 python3.9[58974]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927618.8459914-62-15192791956251/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:20 compute-0 sudo[58972]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:20 compute-0 sshd-session[58547]: Connection closed by 192.168.122.30 port 55302
Dec 05 09:40:20 compute-0 sshd-session[58544]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:40:20 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 05 09:40:20 compute-0 systemd[1]: session-13.scope: Consumed 1.645s CPU time.
Dec 05 09:40:20 compute-0 systemd-logind[789]: Session 13 logged out. Waiting for processes to exit.
Dec 05 09:40:20 compute-0 systemd-logind[789]: Removed session 13.
Dec 05 09:40:26 compute-0 sshd-session[58999]: Accepted publickey for zuul from 192.168.122.30 port 38142 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:40:26 compute-0 systemd-logind[789]: New session 14 of user zuul.
Dec 05 09:40:26 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 05 09:40:26 compute-0 sshd-session[58999]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:40:26 compute-0 python3.9[59152]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:40:27 compute-0 sudo[59306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uesmtozwqsmovwqiyiyeavikrwxcnupn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927627.4951935-59-263831047713907/AnsiballZ_file.py'
Dec 05 09:40:27 compute-0 sudo[59306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:28 compute-0 python3.9[59308]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:28 compute-0 sudo[59306]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:28 compute-0 sudo[59481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdfdzpbzpzonczbynqdysrhijhobdbgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927628.336148-83-224239778779099/AnsiballZ_stat.py'
Dec 05 09:40:28 compute-0 sudo[59481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:28 compute-0 python3.9[59483]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:29 compute-0 sudo[59481]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:29 compute-0 sudo[59604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvhajllqyhnemuhrhibzzskjnemlqufk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927628.336148-83-224239778779099/AnsiballZ_copy.py'
Dec 05 09:40:29 compute-0 sudo[59604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:29 compute-0 python3.9[59606]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764927628.336148-83-224239778779099/.source.json _original_basename=.tqcgx5bm follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:29 compute-0 sudo[59604]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:30 compute-0 sudo[59756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srtxrddkxvlgxzbcrgmxikxuvvwpzqao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927630.340678-152-48945266767311/AnsiballZ_stat.py'
Dec 05 09:40:30 compute-0 sudo[59756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:30 compute-0 python3.9[59758]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:30 compute-0 sudo[59756]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:31 compute-0 sudo[59879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbtrjlecmyinnreqnhwriiyhgyalxotn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927630.340678-152-48945266767311/AnsiballZ_copy.py'
Dec 05 09:40:31 compute-0 sudo[59879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:31 compute-0 python3.9[59881]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927630.340678-152-48945266767311/.source _original_basename=.1lejx265 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:31 compute-0 sudo[59879]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:31 compute-0 sudo[60031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxllytsafaccrbktuyxkntqstywrilks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927631.6036918-200-98459691208125/AnsiballZ_file.py'
Dec 05 09:40:31 compute-0 sudo[60031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:32 compute-0 python3.9[60033]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:40:32 compute-0 sudo[60031]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:32 compute-0 sudo[60183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgnzbrmdqfzfcmnrrlzmzsaofaicrqwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927632.289998-224-117659428576520/AnsiballZ_stat.py'
Dec 05 09:40:32 compute-0 sudo[60183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:32 compute-0 python3.9[60185]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:32 compute-0 sudo[60183]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:33 compute-0 sudo[60306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axacyymrlmoowujzfpwxoprxruvciyjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927632.289998-224-117659428576520/AnsiballZ_copy.py'
Dec 05 09:40:33 compute-0 sudo[60306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:33 compute-0 python3.9[60308]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927632.289998-224-117659428576520/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:40:33 compute-0 sudo[60306]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:33 compute-0 sudo[60458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfqkyaqmqlslfhhfnonulaehpzbgmqxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927633.430992-224-268522140666056/AnsiballZ_stat.py'
Dec 05 09:40:33 compute-0 sudo[60458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:33 compute-0 python3.9[60460]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:33 compute-0 sudo[60458]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:34 compute-0 sudo[60581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnpwjmntarideguizslbykczmcrrhkbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927633.430992-224-268522140666056/AnsiballZ_copy.py'
Dec 05 09:40:34 compute-0 sudo[60581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:34 compute-0 python3.9[60583]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927633.430992-224-268522140666056/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:40:34 compute-0 sudo[60581]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:35 compute-0 sudo[60733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cttgvvyqvihqlazickkaqcnvmqyruwzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927634.780352-311-32104189507705/AnsiballZ_file.py'
Dec 05 09:40:35 compute-0 sudo[60733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:35 compute-0 python3.9[60735]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:35 compute-0 sudo[60733]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:35 compute-0 sudo[60885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sndrollvstuxycwpjjfgvnvatnazclsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927635.437909-335-261962808642994/AnsiballZ_stat.py'
Dec 05 09:40:35 compute-0 sudo[60885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:35 compute-0 python3.9[60887]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:35 compute-0 sudo[60885]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:36 compute-0 sudo[61008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpxmhryboeqjvinqozazgoitshxgkzrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927635.437909-335-261962808642994/AnsiballZ_copy.py'
Dec 05 09:40:36 compute-0 sudo[61008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:36 compute-0 python3.9[61010]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927635.437909-335-261962808642994/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:36 compute-0 sudo[61008]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:36 compute-0 sudo[61160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckmghejvnagvpzoqllxixvjnhzcmfxgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927636.6883605-380-166327072909336/AnsiballZ_stat.py'
Dec 05 09:40:36 compute-0 sudo[61160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:37 compute-0 python3.9[61162]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:37 compute-0 sudo[61160]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:37 compute-0 sudo[61283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-offyqwdcdbglxctirilhrklrhdlykzvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927636.6883605-380-166327072909336/AnsiballZ_copy.py'
Dec 05 09:40:37 compute-0 sudo[61283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:37 compute-0 python3.9[61285]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927636.6883605-380-166327072909336/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:37 compute-0 sudo[61283]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:38 compute-0 sudo[61435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjdoflhfvexvhwuwrrznmjhqmfliegbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927638.187921-425-108919258896614/AnsiballZ_systemd.py'
Dec 05 09:40:38 compute-0 sudo[61435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:39 compute-0 python3.9[61437]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:40:39 compute-0 systemd[1]: Reloading.
Dec 05 09:40:39 compute-0 systemd-rc-local-generator[61457]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:40:39 compute-0 systemd-sysv-generator[61462]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:40:39 compute-0 systemd[1]: Reloading.
Dec 05 09:40:39 compute-0 systemd-sysv-generator[61507]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:40:39 compute-0 systemd-rc-local-generator[61503]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:40:39 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 05 09:40:39 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 05 09:40:39 compute-0 sudo[61435]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:40 compute-0 sudo[61663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbwnbkdjxgebziytihxvuwgxpodoejxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927640.1227553-449-170885759064396/AnsiballZ_stat.py'
Dec 05 09:40:40 compute-0 sudo[61663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:40 compute-0 python3.9[61665]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:40 compute-0 sudo[61663]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:40 compute-0 sudo[61786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuanrlutboqizgquhduzwktvlaokzuxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927640.1227553-449-170885759064396/AnsiballZ_copy.py'
Dec 05 09:40:40 compute-0 sudo[61786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:41 compute-0 python3.9[61788]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927640.1227553-449-170885759064396/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:41 compute-0 sudo[61786]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:41 compute-0 sudo[61938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bilimqiosmpocvwxukzxgyfptmebdryy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927641.4666035-494-231691764703564/AnsiballZ_stat.py'
Dec 05 09:40:41 compute-0 sudo[61938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:41 compute-0 python3.9[61940]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:41 compute-0 sudo[61938]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:42 compute-0 sudo[62061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idcaoezvudgcpuykeldmawgigkvvxdeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927641.4666035-494-231691764703564/AnsiballZ_copy.py'
Dec 05 09:40:42 compute-0 sudo[62061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:42 compute-0 python3.9[62063]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927641.4666035-494-231691764703564/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:42 compute-0 sudo[62061]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:43 compute-0 sudo[62213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtoevnphenbwpmtonbqowlwttvbnqwmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927642.7599654-539-58206026164045/AnsiballZ_systemd.py'
Dec 05 09:40:43 compute-0 sudo[62213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:43 compute-0 python3.9[62215]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:40:43 compute-0 systemd[1]: Reloading.
Dec 05 09:40:43 compute-0 systemd-sysv-generator[62248]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:40:43 compute-0 systemd-rc-local-generator[62245]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:40:43 compute-0 systemd[1]: Reloading.
Dec 05 09:40:43 compute-0 systemd-rc-local-generator[62276]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:40:43 compute-0 systemd-sysv-generator[62283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:40:43 compute-0 systemd[1]: Starting Create netns directory...
Dec 05 09:40:43 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 09:40:43 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 09:40:43 compute-0 systemd[1]: Finished Create netns directory.
Dec 05 09:40:43 compute-0 sudo[62213]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:44 compute-0 python3.9[62442]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:40:44 compute-0 network[62459]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:40:44 compute-0 network[62460]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:40:44 compute-0 network[62461]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:40:49 compute-0 sudo[62721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgvmutwgumrsdgvrgorbdouefqnfnyad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927648.9253554-587-186293586988963/AnsiballZ_systemd.py'
Dec 05 09:40:49 compute-0 sudo[62721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:49 compute-0 python3.9[62723]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:40:49 compute-0 systemd[1]: Reloading.
Dec 05 09:40:49 compute-0 systemd-rc-local-generator[62752]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:40:49 compute-0 systemd-sysv-generator[62755]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:40:49 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 05 09:40:49 compute-0 iptables.init[62762]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 05 09:40:50 compute-0 iptables.init[62762]: iptables: Flushing firewall rules: [  OK  ]
Dec 05 09:40:50 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 05 09:40:50 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 05 09:40:50 compute-0 sudo[62721]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:50 compute-0 sudo[62956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpvzghlaokdqjhxirvzugsohuirlcpxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927650.2137978-587-127653758377718/AnsiballZ_systemd.py'
Dec 05 09:40:50 compute-0 sudo[62956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:50 compute-0 python3.9[62958]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:40:50 compute-0 sudo[62956]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:51 compute-0 sudo[63110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgkfssmfqqbkicsnletkblrskseqjuog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927651.398102-635-104371289699306/AnsiballZ_systemd.py'
Dec 05 09:40:51 compute-0 sudo[63110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:51 compute-0 python3.9[63112]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:40:52 compute-0 systemd[1]: Reloading.
Dec 05 09:40:52 compute-0 systemd-sysv-generator[63143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:40:52 compute-0 systemd-rc-local-generator[63140]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:40:52 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 05 09:40:52 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 05 09:40:52 compute-0 sudo[63110]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:53 compute-0 sudo[63303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlewkxnzfxiqedqhpzxtwizfgyuclbss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927652.6047354-659-8668170905641/AnsiballZ_command.py'
Dec 05 09:40:53 compute-0 sudo[63303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:53 compute-0 python3.9[63305]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:40:53 compute-0 sudo[63303]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:54 compute-0 sudo[63456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzcornjtnteffpxlqcwqklzrmfoaxeqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927653.9760797-701-259299882427626/AnsiballZ_stat.py'
Dec 05 09:40:54 compute-0 sudo[63456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:54 compute-0 python3.9[63458]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:54 compute-0 sudo[63456]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:54 compute-0 sudo[63581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsplxyjogouhyplgyobusadtvofrsmwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927653.9760797-701-259299882427626/AnsiballZ_copy.py'
Dec 05 09:40:54 compute-0 sudo[63581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:55 compute-0 python3.9[63583]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927653.9760797-701-259299882427626/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:55 compute-0 sudo[63581]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:55 compute-0 sudo[63734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpwyppbtndqrjrevgtlocpnadcompgrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927655.4865918-746-120817683638341/AnsiballZ_systemd.py'
Dec 05 09:40:55 compute-0 sudo[63734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:56 compute-0 python3.9[63736]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:40:56 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 05 09:40:56 compute-0 sshd[1005]: Received SIGHUP; restarting.
Dec 05 09:40:56 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 05 09:40:56 compute-0 sshd[1005]: Server listening on 0.0.0.0 port 22.
Dec 05 09:40:56 compute-0 sshd[1005]: Server listening on :: port 22.
Dec 05 09:40:56 compute-0 sudo[63734]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:56 compute-0 sudo[63890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iizqbwzewduwnikpuurmclzmmzyudefv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927656.507913-770-247528364466677/AnsiballZ_file.py'
Dec 05 09:40:56 compute-0 sudo[63890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:57 compute-0 python3.9[63892]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:57 compute-0 sudo[63890]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:57 compute-0 sudo[64042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idcrhwfvyaazrtxleayqgjejuqignhix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927657.2146711-794-242174332394500/AnsiballZ_stat.py'
Dec 05 09:40:57 compute-0 sudo[64042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:57 compute-0 python3.9[64044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:40:57 compute-0 sudo[64042]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:57 compute-0 sudo[64165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xepcpipcrshxfswadgklutixmtqetqms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927657.2146711-794-242174332394500/AnsiballZ_copy.py'
Dec 05 09:40:57 compute-0 sudo[64165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:58 compute-0 python3.9[64167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927657.2146711-794-242174332394500/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:40:58 compute-0 sudo[64165]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:59 compute-0 sudo[64317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqsraahejwnxzicybycravkpztycugxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927658.8621633-848-116234030046681/AnsiballZ_timezone.py'
Dec 05 09:40:59 compute-0 sudo[64317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:59 compute-0 python3.9[64319]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 05 09:40:59 compute-0 systemd[1]: Starting Time & Date Service...
Dec 05 09:40:59 compute-0 systemd[1]: Started Time & Date Service.
Dec 05 09:40:59 compute-0 sudo[64317]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:00 compute-0 sudo[64473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipdugutweiqvwibdtnvrxjqrgnwfdcpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927659.9680717-875-112633944569508/AnsiballZ_file.py'
Dec 05 09:41:00 compute-0 sudo[64473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:00 compute-0 python3.9[64475]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:00 compute-0 sudo[64473]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:00 compute-0 sudo[64625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybzuptbpirqiplppvhpweqfvttrquvca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927660.7109718-899-20007648107297/AnsiballZ_stat.py'
Dec 05 09:41:01 compute-0 sudo[64625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:01 compute-0 python3.9[64627]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:41:01 compute-0 sudo[64625]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:01 compute-0 sudo[64748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoykzhwypgyvwhsdlmpjgocsriohjcjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927660.7109718-899-20007648107297/AnsiballZ_copy.py'
Dec 05 09:41:01 compute-0 sudo[64748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:01 compute-0 python3.9[64750]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927660.7109718-899-20007648107297/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:01 compute-0 sudo[64748]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:02 compute-0 sudo[64900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umtgjqarvhyqzljvvhwevztwacrsdvgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927661.9717221-944-138170096639979/AnsiballZ_stat.py'
Dec 05 09:41:02 compute-0 sudo[64900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:02 compute-0 python3.9[64902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:41:02 compute-0 sudo[64900]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:02 compute-0 sudo[65023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecbipakgmuhproxgzqpflvaxzbjwbbra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927661.9717221-944-138170096639979/AnsiballZ_copy.py'
Dec 05 09:41:02 compute-0 sudo[65023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:02 compute-0 python3.9[65025]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927661.9717221-944-138170096639979/.source.yaml _original_basename=.xdz7u5sp follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:02 compute-0 sudo[65023]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:03 compute-0 sudo[65175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muyxtznmjcjhrwmfqbiwaksjztveyyna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927663.289902-989-114095225714125/AnsiballZ_stat.py'
Dec 05 09:41:03 compute-0 sudo[65175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:03 compute-0 python3.9[65177]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:41:03 compute-0 sudo[65175]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:04 compute-0 sudo[65298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzqcijignzzqaxuifrtzacrqmqxhyqch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927663.289902-989-114095225714125/AnsiballZ_copy.py'
Dec 05 09:41:04 compute-0 sudo[65298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:04 compute-0 python3.9[65300]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927663.289902-989-114095225714125/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:04 compute-0 sudo[65298]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:04 compute-0 sudo[65450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcdcyghhnpdqtabfgnbvtvgtfxucnbpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927664.5912015-1034-237444271262418/AnsiballZ_command.py'
Dec 05 09:41:04 compute-0 sudo[65450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:05 compute-0 python3.9[65452]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:41:05 compute-0 sudo[65450]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:05 compute-0 sudo[65603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kahsecqyuosdmnnqgtubjktzajzfqykc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927665.3039012-1058-166862390171970/AnsiballZ_command.py'
Dec 05 09:41:05 compute-0 sudo[65603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:05 compute-0 python3.9[65605]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:41:05 compute-0 sudo[65603]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:06 compute-0 sudo[65756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uckicbxblkdxzbvcajfwmesiyuqnlnmh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927666.0064425-1082-197176270197747/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 09:41:06 compute-0 sudo[65756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:06 compute-0 python3[65758]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 09:41:06 compute-0 sudo[65756]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:07 compute-0 sudo[65908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cicsbzujxpribwbbjlaszgbqzwzztune ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927666.8188887-1106-23628026560705/AnsiballZ_stat.py'
Dec 05 09:41:07 compute-0 sudo[65908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:07 compute-0 python3.9[65910]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:41:07 compute-0 sudo[65908]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:07 compute-0 sudo[66031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leftmiyagtacipktianvszytbshhxdlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927666.8188887-1106-23628026560705/AnsiballZ_copy.py'
Dec 05 09:41:07 compute-0 sudo[66031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:07 compute-0 python3.9[66033]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927666.8188887-1106-23628026560705/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:08 compute-0 sudo[66031]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:08 compute-0 sudo[66183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvnbgmeqeisqtzimpcitrlxbggyhbwce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927668.169322-1151-215151614306571/AnsiballZ_stat.py'
Dec 05 09:41:08 compute-0 sudo[66183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:08 compute-0 python3.9[66185]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:41:08 compute-0 sudo[66183]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:08 compute-0 sudo[66306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-segctmhaubraboxzvgdwgskwiqzcmegn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927668.169322-1151-215151614306571/AnsiballZ_copy.py'
Dec 05 09:41:08 compute-0 sudo[66306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:09 compute-0 python3.9[66308]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927668.169322-1151-215151614306571/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:09 compute-0 sudo[66306]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:09 compute-0 sudo[66458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgskzloqpeacxwexipyxoppvleipqihg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927669.5049505-1196-225761764783964/AnsiballZ_stat.py'
Dec 05 09:41:09 compute-0 sudo[66458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:09 compute-0 python3.9[66460]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:41:10 compute-0 sudo[66458]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:10 compute-0 sudo[66581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtsqsodqqnkmmmsxxvdlkehnbrxzzrmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927669.5049505-1196-225761764783964/AnsiballZ_copy.py'
Dec 05 09:41:10 compute-0 sudo[66581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:10 compute-0 python3.9[66583]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927669.5049505-1196-225761764783964/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:10 compute-0 sudo[66581]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:11 compute-0 sudo[66733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sngxkvthcujeandutxeifasbfkvxeeou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927670.8407598-1241-225641905985473/AnsiballZ_stat.py'
Dec 05 09:41:11 compute-0 sudo[66733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:11 compute-0 python3.9[66735]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:41:11 compute-0 sudo[66733]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:11 compute-0 sudo[66856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfgqlzrlmpelzpanqxtfuoopqdkwhwom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927670.8407598-1241-225641905985473/AnsiballZ_copy.py'
Dec 05 09:41:11 compute-0 sudo[66856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:11 compute-0 python3.9[66858]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927670.8407598-1241-225641905985473/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:11 compute-0 sudo[66856]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:12 compute-0 sudo[67008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xedhalmwlzvmcojcvxfrkdxkmzfpergy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927672.137606-1286-216994784568267/AnsiballZ_stat.py'
Dec 05 09:41:12 compute-0 sudo[67008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:12 compute-0 python3.9[67010]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:41:12 compute-0 sudo[67008]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:13 compute-0 sudo[67131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmpsaiwozoidkrdcfzauhqnzxcycdvng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927672.137606-1286-216994784568267/AnsiballZ_copy.py'
Dec 05 09:41:13 compute-0 sudo[67131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:13 compute-0 python3.9[67133]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927672.137606-1286-216994784568267/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:13 compute-0 sudo[67131]: pam_unix(sudo:session): session closed for user root
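The four stat/copy pairs above are Ansible's standard idempotence handshake: ansible.legacy.stat hashes the destination, and ansible.legacy.copy only ships the file when the sha1 differs. A minimal shell sketch of that check, using the edpm-rules.nft checksum from the log (install stands in for copy's atomic write; the staged .source.nft path is shortened here):

    want=693377dc03e5b6b24713cb537b18b88774724e35
    have=$(sha1sum /etc/nftables/edpm-rules.nft 2>/dev/null | awk '{print $1}')
    # rewrite the file only when content actually changed, preserving owner/group/mode
    [ "$have" = "$want" ] || install -o root -g root -m 0600 .source.nft /etc/nftables/edpm-rules.nft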
Dec 05 09:41:13 compute-0 sudo[67283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wajzxafhfdcdrbgaoglswaughutupptk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927673.496446-1331-49235239239636/AnsiballZ_file.py'
Dec 05 09:41:13 compute-0 sudo[67283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:13 compute-0 python3.9[67285]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:13 compute-0 sudo[67283]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:14 compute-0 sudo[67435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-astovugjfdkplwcihqxhcvzkzdlkjtbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927674.2064598-1355-102237538313298/AnsiballZ_command.py'
Dec 05 09:41:14 compute-0 sudo[67435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:14 compute-0 python3.9[67437]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:41:14 compute-0 sudo[67435]: pam_unix(sudo:session): session closed for user root
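The command task above is a dry-run syntax check: the five EDPM fragments are concatenated in load order and fed to nft with -c, which parses the whole ruleset without committing anything to the kernel. Equivalent sketch, paths and order exactly as logged:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: check only, no kernel changes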
Dec 05 09:41:15 compute-0 sudo[67594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhaynizogregdqozwpxknjwibikksiul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927674.9669154-1379-135647640891686/AnsiballZ_blockinfile.py'
Dec 05 09:41:15 compute-0 sudo[67594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:15 compute-0 python3.9[67596]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:15 compute-0 sudo[67594]: pam_unix(sudo:session): session closed for user root
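The blockinfile task validates the candidate file with nft -c -f %s, then anchors the EDPM includes in /etc/sysconfig/nftables.conf between its markers so the ruleset is reassembled on every nftables.service start. Based on the block= and marker= arguments in the invocation above, the managed region should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK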
Dec 05 09:41:16 compute-0 sudo[67747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njxjdvwrgdvsuqroqayqizgawolxiaum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927675.930725-1406-266155321103700/AnsiballZ_file.py'
Dec 05 09:41:16 compute-0 sudo[67747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:16 compute-0 python3.9[67749]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:16 compute-0 sudo[67747]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:16 compute-0 sudo[67899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itxcctmzfmlrozvkgzchrrfcywrroowv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927676.5550077-1406-205822474962211/AnsiballZ_file.py'
Dec 05 09:41:16 compute-0 sudo[67899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:16 compute-0 python3.9[67901]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:17 compute-0 sudo[67899]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:17 compute-0 sudo[68051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkpvtrakcxsdpdotjsxqkxqanysgzxof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927677.320854-1451-204569451729659/AnsiballZ_mount.py'
Dec 05 09:41:17 compute-0 sudo[68051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:18 compute-0 python3.9[68053]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 05 09:41:18 compute-0 sudo[68051]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:18 compute-0 sudo[68204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvzhomxhqyzllhwjnejagpqcaurbjjwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927678.179113-1451-174265629706146/AnsiballZ_mount.py'
Dec 05 09:41:18 compute-0 sudo[68204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:18 compute-0 python3.9[68206]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 05 09:41:18 compute-0 sudo[68204]: pam_unix(sudo:session): session closed for user root
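The two file/mount pairs above provision per-size hugepage pools; ansible.posix.mount with state=mounted and boot=True both mounts the filesystem immediately and persists it in /etc/fstab. A manual-equivalent sketch (the fstab field defaults are an assumption):

    mkdir -p /dev/hugepages1G /dev/hugepages2M
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # roughly what boot=True records in /etc/fstab:
    # none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    # none /dev/hugepages2M hugetlbfs pagesize=2M 0 0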
Dec 05 09:41:19 compute-0 sshd-session[59002]: Connection closed by 192.168.122.30 port 38142
Dec 05 09:41:19 compute-0 sshd-session[58999]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:41:19 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 05 09:41:19 compute-0 systemd[1]: session-14.scope: Consumed 35.442s CPU time.
Dec 05 09:41:19 compute-0 systemd-logind[789]: Session 14 logged out. Waiting for processes to exit.
Dec 05 09:41:19 compute-0 systemd-logind[789]: Removed session 14.
Dec 05 09:41:24 compute-0 sshd-session[68232]: Accepted publickey for zuul from 192.168.122.30 port 57234 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:41:24 compute-0 systemd-logind[789]: New session 15 of user zuul.
Dec 05 09:41:24 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 05 09:41:24 compute-0 sshd-session[68232]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:41:25 compute-0 sudo[68385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxtlpmnsruvavqrhfcehhisphvwfdcjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927684.4754007-18-80746404336137/AnsiballZ_tempfile.py'
Dec 05 09:41:25 compute-0 sudo[68385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:25 compute-0 python3.9[68387]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 05 09:41:25 compute-0 sudo[68385]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:26 compute-0 sudo[68537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpefxovppvgynsjnkviboisqhwupzloh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927685.9730837-54-78515906568912/AnsiballZ_stat.py'
Dec 05 09:41:26 compute-0 sudo[68537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:26 compute-0 python3.9[68539]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:41:26 compute-0 sudo[68537]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:27 compute-0 sudo[68689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hikiehswehqhqpxxklikyodjkfgdlrlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927687.220119-84-22202715187039/AnsiballZ_setup.py'
Dec 05 09:41:27 compute-0 sudo[68689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:28 compute-0 python3.9[68691]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:41:28 compute-0 sudo[68689]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:28 compute-0 sudo[68841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcxgteytsfykdrhxovbptqttciqxcqfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927688.4027135-109-196303985652721/AnsiballZ_blockinfile.py'
Dec 05 09:41:28 compute-0 sudo[68841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:29 compute-0 python3.9[68843]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxQNlo8LJIA8xfbJGKSdMV98UOyu9sX4A5uTtBflRAOkH2wXRmdsECPkzI5G44w402q0xg3frbgD+BCh0dOaEjB53lSL9fiuoFoP2UMDOiBdr13eOasoBklzMszBfqWrVOks662bXDBzMQ61eXcXHiU5QWmKCS1HrupYfTHcabdj2EL/qsRRwL8Auc8eBHxl3VUFxB05r2Uu4Ls3Rt42dXItXqSr9ALeWVbYPQRh5O0Q8GItA45C+msxeJMBFgE8UcN3mm5qgcAxLZEViqYfKUEoXhxs57riJWdfojrm8a0UCNV9uLTW37s06Hg5QXXpwRm8AQqH4kXiSb/I+Dx8y9V568G3r2UAIy/DXBDgpu0+eVaNleKpcClTi/gUXjVedABom8PDw4ot8kdwBujvaB5J7Fmf9yi3XbdjQlMU0F+v8TTLmhUTMZbcSdlvH6ZEdJUp+cs6h/dep1Ia2NdljpuBse8DVa9vLu/amki3Qb08HvTtMHJVqHtKzSn+saAA8=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYyvL99EFBHDm2asxUS8r44IHbcLB7lwrOEDjFJjq8+
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCHaOVGjK9mhAo2eO+zVKbUHICCg0NK+AxIuZHw1DeeR3t1zLuA1LozuMzNRiZbW6GoVgw9PyUclcy8Qm1CEzNw=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCCGYaoVdx41VJW+np05HpScB1pj66kWjXXdk8rueVCQEL0TF1cfRmP7kNWhq+oLpDC2qeyocH2is9CJbCeF3nMnz3pcXgfNJPsl77PpHuYx9+dJ/l7aiEp2MNYzamDc3S92PWUyMWNOlCRfuBrHSZBAvnB0xD3R99yvMHKc67cXzjVV4nSkUQqBv0MrH9HgFmhfG3gVFbdDXCRgFIG2h+ZF1DPnbsrsNdFK2ALSArtL+sfxDi+msM9PxuVl/C9PiKNRcHMUcrE3V3DjbRVO3nzVs9HZ1bJMyZodXLzB1JDhL1653n8Cud1gpE0PC7bhd3UIlCeSOpZAc0+Dn8vSvN+RHUmd7gXWo5cSXROdbzLhtT83Tzh/tl0dfNd6I7+//D75TB6vKSMnF921Gt1OkB29orcpfiGcS0ibDi8By5Xy1IEq/3DLbUNKAJ38yvdagfMHVoFlITKztKyx00vtL3Vhq6d/+p7XPkb1pJA2EvTvWJI8J5fq5UyFJ6V/gxgqk=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM8586CWFNOaDluakc5a5Mj5ccpeoURPnbi800rdSC11
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB3iz1XalQfQsvPdIq2L/sK4J7E2PFRYviaI0Y8WL6ihsqpbSqR/q+QE3EwzZARmbL5is6sKoBExWB+qAZZw/mw=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqOVNDEMJj7JrxcivnSQxHLR3UNAyfEW275XwwR2jqDQwc5jHlKQN4yofccTvovJGu6nT/aKnvQp7UAX8rx+S+QqDLDzdFmyzd+Fu2yNA2xzyOG24YuBHbDF475BhB48C+7qV2QB5mwggpvanoqX1kcVbWglrlLBsuq6qRxpzz2gWprt9FGznLxXJI3JngVDt0Bbnug36yxZrL5c4l6eXNa2wuWzOG1uUcY73v39V1eLZrEfwrKGrHZuACNEl5UV8i4XepA5a/s8VDbe4o3fbA8ntB6z/oDq73X7wYyRME4HKlPcXoY57jsbPsYeg+Z3uYZ+8wCFJUwhfZjQFuUu6UEqkjMsl/DLQzI4OjWenkpnWdSxGIqYsvn0tlggV6eclljuuW2JyQde+uS8l0XAPU+6aT1VDWnUSr0w/mXWcVqjimgDfkY5UZk8z9YJTydk11MvySj4gz8WqEuTeUH4dGmJlis79zWcTDnldC0pDCsj9CZNBrcxuoePr7pKMh9MM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA1JP08gBa69YpXGOxf1p0VMkT6sTUHT+UQjh3TGmf48
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOT71mYgQwbQpmRBwr+IUF/Vj3hJuhHTHm0L+1h7O5z+V6giaTp0V2h33GCQ7WbEntvKd2CSppF3vBCKuE1b+hI=
                                             create=True mode=0644 path=/tmp/ansible.7elkamy8 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:29 compute-0 sudo[68841]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:29 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 05 09:41:29 compute-0 sudo[68995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itchenavpwnybtffnzdehhssiqviqgea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927689.4771717-133-103269824812416/AnsiballZ_command.py'
Dec 05 09:41:29 compute-0 sudo[68995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:30 compute-0 python3.9[68997]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.7elkamy8' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:41:30 compute-0 sudo[68995]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:30 compute-0 sudo[69149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jflgfbnxbfrlgpvuveqlwoiihlxkxdzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927690.388607-157-152180499434475/AnsiballZ_file.py'
Dec 05 09:41:30 compute-0 sudo[69149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:31 compute-0 python3.9[69151]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.7elkamy8 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:31 compute-0 sudo[69149]: pam_unix(sudo:session): session closed for user root
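Session 15 rebuilds /etc/ssh/ssh_known_hosts from the freshly gathered host keys: stage the three nodes' rsa/ed25519/ecdsa keys in a root-owned tempfile, clobber the system file with cat (overwrite, not append, since the file is fully managed), then delete the tempfile. Sketch of the same sequence (the tempfile name is random per run):

    tmp=$(mktemp /tmp/ansible.XXXXXXXX)
    # blockinfile writes the compute-0/1/2 keys between ANSIBLE MANAGED BLOCK markers
    cat "$tmp" > /etc/ssh/ssh_known_hosts
    rm -f "$tmp"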
Dec 05 09:41:31 compute-0 sshd-session[68235]: Connection closed by 192.168.122.30 port 57234
Dec 05 09:41:31 compute-0 sshd-session[68232]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:41:31 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 05 09:41:31 compute-0 systemd[1]: session-15.scope: Consumed 3.440s CPU time.
Dec 05 09:41:31 compute-0 systemd-logind[789]: Session 15 logged out. Waiting for processes to exit.
Dec 05 09:41:31 compute-0 systemd-logind[789]: Removed session 15.
Dec 05 09:41:37 compute-0 sshd-session[69176]: Accepted publickey for zuul from 192.168.122.30 port 60894 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:41:37 compute-0 systemd-logind[789]: New session 16 of user zuul.
Dec 05 09:41:37 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 05 09:41:37 compute-0 sshd-session[69176]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:41:38 compute-0 python3.9[69329]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:41:39 compute-0 sudo[69483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhgffjqumpjgimobsbtcwjwzowryrdcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927699.1912897-56-151766800310885/AnsiballZ_systemd.py'
Dec 05 09:41:39 compute-0 sudo[69483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:40 compute-0 python3.9[69485]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 05 09:41:40 compute-0 sudo[69483]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:40 compute-0 sudo[69637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cquypoxcwckqrbrttcfgotymiglfpedc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927700.3499918-80-190267734392323/AnsiballZ_systemd.py'
Dec 05 09:41:40 compute-0 sudo[69637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:41 compute-0 python3.9[69639]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:41:41 compute-0 sudo[69637]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:41 compute-0 sudo[69790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfmpxcgmvhsqqmybbtwbnhbohcgylahh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927701.483559-107-267443207494348/AnsiballZ_command.py'
Dec 05 09:41:41 compute-0 sudo[69790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:42 compute-0 python3.9[69792]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:41:42 compute-0 sudo[69790]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:43 compute-0 sudo[69943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytogdrpdygfoyfqsggwxzxovfwtcudng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927702.952856-131-99656040472022/AnsiballZ_stat.py'
Dec 05 09:41:43 compute-0 sudo[69943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:43 compute-0 python3.9[69945]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:41:43 compute-0 sudo[69943]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:44 compute-0 sudo[70097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qchrkrgaolzclwtrcfsozcpbxcgzyaql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927703.8075037-155-112333938030955/AnsiballZ_command.py'
Dec 05 09:41:44 compute-0 sudo[70097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:44 compute-0 python3.9[70099]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:41:44 compute-0 sudo[70097]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:45 compute-0 sudo[70252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjpcruheujhossxjyogxemifvybygabv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927704.8312542-179-205344764078354/AnsiballZ_file.py'
Dec 05 09:41:45 compute-0 sudo[70252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:45 compute-0 python3.9[70254]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:45 compute-0 sudo[70252]: pam_unix(sudo:session): session closed for user root
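This is the live counterpart of the earlier -c check, split into two phases: edpm-chains.nft is applied on its own first (creating chains is idempotent), then flushes, rules, and jump updates are streamed to nft as one transaction, and the .changed marker written in session 14 is consumed. Sketch:

    nft -f /etc/nftables/edpm-chains.nft
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
    rm -f /etc/nftables/edpm-rules.nft.changed   # rules are live; drop the trigger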
Dec 05 09:41:45 compute-0 sshd-session[69179]: Connection closed by 192.168.122.30 port 60894
Dec 05 09:41:45 compute-0 sshd-session[69176]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:41:45 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 05 09:41:45 compute-0 systemd[1]: session-16.scope: Consumed 4.429s CPU time.
Dec 05 09:41:45 compute-0 systemd-logind[789]: Session 16 logged out. Waiting for processes to exit.
Dec 05 09:41:45 compute-0 systemd-logind[789]: Removed session 16.
Dec 05 09:41:51 compute-0 sshd-session[70279]: Accepted publickey for zuul from 192.168.122.30 port 47788 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:41:51 compute-0 systemd-logind[789]: New session 17 of user zuul.
Dec 05 09:41:51 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 05 09:41:51 compute-0 sshd-session[70279]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:41:52 compute-0 python3.9[70432]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:41:53 compute-0 sudo[70586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjsjbpxktkvszkgpwlrzxdsgeepanscu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927712.778191-62-118690150571381/AnsiballZ_setup.py'
Dec 05 09:41:53 compute-0 sudo[70586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:53 compute-0 python3.9[70588]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:41:53 compute-0 sudo[70586]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:53 compute-0 sudo[70670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rebwfhcbygmyxktodxhsacoxcjakdthc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927712.778191-62-118690150571381/AnsiballZ_dnf.py'
Dec 05 09:41:53 compute-0 sudo[70670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:54 compute-0 python3.9[70672]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 09:41:55 compute-0 sudo[70670]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:56 compute-0 python3.9[70823]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
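needs-restarting ships in the yum-utils package installed just before, which is why that dnf task precedes this one. With -r it only checks whether a full reboot is required (kernel or core library updates) and answers through its exit code:

    needs-restarting -r; rc=$?
    # rc=0: no reboot needed, rc=1: reboot required
    [ "$rc" -eq 1 ] && touch /var/lib/openstack/reboot_required/pkg-update   # hypothetical marker name under the directory the next task scans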
Dec 05 09:41:58 compute-0 python3.9[70974]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 09:41:59 compute-0 python3.9[71124]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:41:59 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]

Dec 05 09:41:59 compute-0 python3.9[71275]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:42:00 compute-0 sshd-session[70282]: Connection closed by 192.168.122.30 port 47788
Dec 05 09:42:00 compute-0 sshd-session[70279]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:42:00 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 05 09:42:00 compute-0 systemd[1]: session-17.scope: Consumed 6.147s CPU time.
Dec 05 09:42:00 compute-0 systemd-logind[789]: Session 17 logged out. Waiting for processes to exit.
Dec 05 09:42:00 compute-0 systemd-logind[789]: Removed session 17.
Dec 05 09:42:11 compute-0 sshd-session[71300]: Accepted publickey for zuul from 38.129.56.31 port 58660 ssh2: RSA SHA256:KFmgdvKpB8DAdlN2nfDmmuFckJgJGHDMrTR5Gyr7RXM
Dec 05 09:42:12 compute-0 systemd-logind[789]: New session 18 of user zuul.
Dec 05 09:42:12 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 05 09:42:12 compute-0 sshd-session[71300]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:42:12 compute-0 sudo[71376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lllhthqbaxjoqozunqvguuutmazmlzod ; /usr/bin/python3'
Dec 05 09:42:12 compute-0 sudo[71376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:12 compute-0 useradd[71380]: new group: name=ceph-admin, GID=42478
Dec 05 09:42:12 compute-0 useradd[71380]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 05 09:42:12 compute-0 sudo[71376]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:13 compute-0 sudo[71462]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odycqjggecxmduocgbbqflotmqsbayzn ; /usr/bin/python3'
Dec 05 09:42:13 compute-0 sudo[71462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:13 compute-0 sudo[71462]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:13 compute-0 sudo[71535]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqykbeectbckvblimgkapbkbflrtjblo ; /usr/bin/python3'
Dec 05 09:42:13 compute-0 sudo[71535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:13 compute-0 sudo[71535]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:14 compute-0 sudo[71585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccfgvdgofihubtpmgrnhyyjfkdfikctc ; /usr/bin/python3'
Dec 05 09:42:14 compute-0 sudo[71585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:14 compute-0 sudo[71585]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:14 compute-0 sudo[71611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvkhxdsujeiujgsbsspgaknzgiziazfn ; /usr/bin/python3'
Dec 05 09:42:14 compute-0 sudo[71611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:14 compute-0 sudo[71611]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:14 compute-0 sudo[71637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaurvvyzqkhjiklhlofpmqqpvdlyrbfv ; /usr/bin/python3'
Dec 05 09:42:14 compute-0 sudo[71637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:15 compute-0 sudo[71637]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:15 compute-0 sudo[71663]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suycksmlrwhgsdckurhwykorexjiksso ; /usr/bin/python3'
Dec 05 09:42:15 compute-0 sudo[71663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:15 compute-0 sudo[71663]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:16 compute-0 sudo[71741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbsgwqymgbdykjrclamjgcbhlpkkszgy ; /usr/bin/python3'
Dec 05 09:42:16 compute-0 sudo[71741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:20 compute-0 sudo[71741]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:21 compute-0 sudo[71814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uersicpvgbepvcmdfqronzwrxvogqxfd ; /usr/bin/python3'
Dec 05 09:42:21 compute-0 sudo[71814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:21 compute-0 sudo[71814]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:21 compute-0 sudo[71916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxnxdjyxeecqhxtdfyhtufxczuwjwpzb ; /usr/bin/python3'
Dec 05 09:42:21 compute-0 sudo[71916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:21 compute-0 sudo[71916]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:22 compute-0 sudo[71989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnplsafltsqfpxujouxcmmawdghfkgci ; /usr/bin/python3'
Dec 05 09:42:22 compute-0 sudo[71989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:22 compute-0 chronyd[58518]: Selected source 23.133.168.246 (pool.ntp.org)
Dec 05 09:42:22 compute-0 sudo[71989]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:22 compute-0 sudo[72039]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yftezwuiftjylylighshhdbsdrqnqgli ; /usr/bin/python3'
Dec 05 09:42:22 compute-0 sudo[72039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:23 compute-0 python3[72041]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:42:24 compute-0 sudo[72039]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:24 compute-0 sudo[72134]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfjlthnkqkiicxokjyamkfjfnhcfhxdz ; /usr/bin/python3'
Dec 05 09:42:24 compute-0 sudo[72134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:25 compute-0 python3[72136]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 09:42:26 compute-0 sudo[72134]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:27 compute-0 sudo[72161]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mggtwaloivartmksaqsbfcffwslntrrg ; /usr/bin/python3'
Dec 05 09:42:27 compute-0 sudo[72161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:27 compute-0 python3[72163]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 09:42:27 compute-0 sudo[72161]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:27 compute-0 sudo[72187]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbwvtmjmfvqrvdjskxqmougblfqoauvf ; /usr/bin/python3'
Dec 05 09:42:27 compute-0 sudo[72187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:27 compute-0 python3[72189]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:42:27 compute-0 kernel: loop: module loaded
Dec 05 09:42:27 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec 05 09:42:27 compute-0 sudo[72187]: pam_unix(sudo:session): session closed for user root
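The dd invocation above writes no data at all: bs=1 count=0 seek=20G merely sets the file length, producing a 20 GiB sparse file, and losetup exposes it as /dev/loop3 (the kernel's 41943040-sector capacity change is exactly 20 GiB in 512-byte sectors). Sketch, as logged:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G   # sparse: allocates size only
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk /dev/loop3   # should show a 20G loop device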
Dec 05 09:42:28 compute-0 sudo[72222]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utmgfvdthabugtjsauvagfswctitnwlm ; /usr/bin/python3'
Dec 05 09:42:28 compute-0 sudo[72222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:28 compute-0 python3[72224]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:42:28 compute-0 lvm[72227]: PV /dev/loop3 not used.
Dec 05 09:42:28 compute-0 lvm[72229]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:42:28 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 05 09:42:28 compute-0 lvm[72237]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 05 09:42:28 compute-0 lvm[72239]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:42:28 compute-0 lvm[72239]: VG ceph_vg0 finished
Dec 05 09:42:28 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 05 09:42:28 compute-0 sudo[72222]: pam_unix(sudo:session): session closed for user root
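On top of the loop device the play builds a minimal LVM stack, one PV, one VG, and one LV spanning all free extents, presumably the logical volume the Ceph spec deployed later in this log consumes as an OSD data device. Verification sketch (expected sizes are an assumption):

    pvs /dev/loop3    # one PV, in VG ceph_vg0
    lvs ceph_vg0      # ceph_lv0, ~20g, 100% of the VG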
Dec 05 09:42:29 compute-0 sudo[72315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffplpltolwrcezwxlquljacigrssobgh ; /usr/bin/python3'
Dec 05 09:42:29 compute-0 sudo[72315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:29 compute-0 python3[72317]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:42:29 compute-0 sudo[72315]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:29 compute-0 sudo[72388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmmaaxjdlvuwkewsjkinmqnjaefnesvk ; /usr/bin/python3'
Dec 05 09:42:29 compute-0 sudo[72388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:29 compute-0 python3[72390]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927748.9488597-36868-633028779521/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:42:29 compute-0 sudo[72388]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:30 compute-0 sudo[72438]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suooeqvslltsouqrskgrcgtedtnkfhgv ; /usr/bin/python3'
Dec 05 09:42:30 compute-0 sudo[72438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:30 compute-0 python3[72440]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:42:30 compute-0 systemd[1]: Reloading.
Dec 05 09:42:30 compute-0 systemd-rc-local-generator[72467]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:42:30 compute-0 systemd-sysv-generator[72471]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:42:30 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 05 09:42:30 compute-0 bash[72480]: /dev/loop3: [64513]:4327941 (/var/lib/ceph-osd-0.img)
Dec 05 09:42:30 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 05 09:42:30 compute-0 sudo[72438]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:30 compute-0 lvm[72483]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:42:30 compute-0 lvm[72483]: VG ceph_vg0 finished
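ceph-osd-losetup-0.service exists to re-attach the backing file at boot, since loop device mappings do not survive reboots; the bash output above is losetup reporting the already-present attachment. The real template is ceph-osd-losetup.service.j2 and its contents are not in this log, so the following is a hypothetical reconstruction consistent with the oneshot Starting/Finished lines:

    [Unit]
    Description=Ceph OSD losetup
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # hypothetical: report the mapping if present, otherwise (re)create it
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'
    [Install]
    WantedBy=multi-user.target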
Dec 05 09:42:33 compute-0 python3[72507]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:42:36 compute-0 sudo[72598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsjuadmzncfdllawnzewpzfsrhfzmtqn ; /usr/bin/python3'
Dec 05 09:42:36 compute-0 sudo[72598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:36 compute-0 python3[72600]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 09:42:39 compute-0 sudo[72598]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:39 compute-0 sudo[72655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koiicdicmbzcmkmkkogzjmotglofcvjq ; /usr/bin/python3'
Dec 05 09:42:39 compute-0 sudo[72655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:39 compute-0 python3[72657]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 09:42:42 compute-0 groupadd[72668]: group added to /etc/group: name=cephadm, GID=992
Dec 05 09:42:42 compute-0 groupadd[72668]: group added to /etc/gshadow: name=cephadm
Dec 05 09:42:43 compute-0 groupadd[72668]: new group: name=cephadm, GID=992
Dec 05 09:42:43 compute-0 useradd[72675]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 05 09:42:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 09:42:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 09:42:44 compute-0 sudo[72655]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:44 compute-0 sudo[72770]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozitkhearbbvlcvavdsdfsjelzdznlyz ; /usr/bin/python3'
Dec 05 09:42:44 compute-0 sudo[72770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:44 compute-0 python3[72772]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 09:42:44 compute-0 sudo[72770]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 09:42:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 09:42:44 compute-0 systemd[1]: run-r5174a07c149648b5825dce69083ea582.service: Deactivated successfully.
Dec 05 09:42:44 compute-0 sudo[72799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uljaogvrulilqrqyvjorzdtwsvltmfun ; /usr/bin/python3'
Dec 05 09:42:44 compute-0 sudo[72799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:44 compute-0 python3[72801]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:42:45 compute-0 sudo[72799]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:45 compute-0 sudo[72861]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjunuesrsvufppyvexrgwhuycaygphct ; /usr/bin/python3'
Dec 05 09:42:45 compute-0 sudo[72861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:46 compute-0 python3[72863]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:42:46 compute-0 sudo[72861]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:46 compute-0 sudo[72887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etfpikkxybxqfvmntjdpuwtwjepeefio ; /usr/bin/python3'
Dec 05 09:42:46 compute-0 sudo[72887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:42:46 compute-0 python3[72889]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:42:46 compute-0 sudo[72887]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:47 compute-0 sudo[72965]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iejiphcufgabpcyhrmsgjqllhapcrkso ; /usr/bin/python3'
Dec 05 09:42:47 compute-0 sudo[72965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:47 compute-0 python3[72967]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:42:47 compute-0 sudo[72965]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:47 compute-0 sudo[73038]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdabnsilmfbiqdxrsimmaljokwmobknq ; /usr/bin/python3'
Dec 05 09:42:47 compute-0 sudo[73038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:47 compute-0 python3[73040]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927766.8895776-37061-78960714823490/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:42:47 compute-0 sudo[73038]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:48 compute-0 sudo[73140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyxahltkuemwedqyisblgtwmjrgwxkeq ; /usr/bin/python3'
Dec 05 09:42:48 compute-0 sudo[73140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:48 compute-0 python3[73142]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:42:48 compute-0 sudo[73140]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:48 compute-0 sudo[73213]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdjmwgcbwehrrkvsmlpibmafllmehduy ; /usr/bin/python3'
Dec 05 09:42:48 compute-0 sudo[73213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:48 compute-0 python3[73215]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927768.1220381-37079-257980218134917/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:42:48 compute-0 sudo[73213]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:49 compute-0 sudo[73263]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tprszpeygqmdxuyimqzaodlpgiwngfkp ; /usr/bin/python3'
Dec 05 09:42:49 compute-0 sudo[73263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:49 compute-0 python3[73265]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 09:42:49 compute-0 sudo[73263]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:49 compute-0 sudo[73291]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vopcrkjdjskaixiocscaaqszblldwidd ; /usr/bin/python3'
Dec 05 09:42:49 compute-0 sudo[73291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:49 compute-0 python3[73293]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 09:42:49 compute-0 sudo[73291]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:49 compute-0 sudo[73319]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwqisvtvwctrpncagcueqstoupuaurqx ; /usr/bin/python3'
Dec 05 09:42:49 compute-0 sudo[73319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:49 compute-0 python3[73321]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 09:42:49 compute-0 sudo[73319]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:50 compute-0 sudo[73347]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsgoiabgdvfdvfpchisfpxlibayfehdm ; /usr/bin/python3'
Dec 05 09:42:50 compute-0 sudo[73347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:50 compute-0 python3[73349]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config /home/ceph-admin/assimilate_ceph.conf --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
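The bootstrap call pins the cluster FSID, reuses the pre-seeded ceph-admin SSH keypair (the ceph-admin login in session 19 below appears to be cephadm verifying its own SSH access to the host), and skips firewalld, the dashboard, and the monitoring stack. Once it returns, cluster state can be checked from the host through the CLI container:

    cephadm shell -- ceph -s   # runs ceph -s inside the quay.io/ceph/ceph:v19 image pulled below
    cephadm ls --no-detail     # same daemon inventory the play queried before bootstrapping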
Dec 05 09:42:50 compute-0 sshd-session[73353]: Accepted publickey for ceph-admin from 192.168.122.100 port 50116 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:42:50 compute-0 systemd-logind[789]: New session 19 of user ceph-admin.
Dec 05 09:42:50 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 05 09:42:50 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 05 09:42:50 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 05 09:42:50 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 05 09:42:50 compute-0 systemd[73357]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:42:50 compute-0 systemd[73357]: Queued start job for default target Main User Target.
Dec 05 09:42:50 compute-0 systemd[73357]: Created slice User Application Slice.
Dec 05 09:42:50 compute-0 systemd[73357]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 09:42:50 compute-0 systemd[73357]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 09:42:50 compute-0 systemd[73357]: Reached target Paths.
Dec 05 09:42:50 compute-0 systemd[73357]: Reached target Timers.
Dec 05 09:42:50 compute-0 systemd[73357]: Starting D-Bus User Message Bus Socket...
Dec 05 09:42:50 compute-0 systemd[73357]: Starting Create User's Volatile Files and Directories...
Dec 05 09:42:50 compute-0 systemd[73357]: Listening on D-Bus User Message Bus Socket.
Dec 05 09:42:50 compute-0 systemd[73357]: Reached target Sockets.
Dec 05 09:42:50 compute-0 systemd[73357]: Finished Create User's Volatile Files and Directories.
Dec 05 09:42:50 compute-0 systemd[73357]: Reached target Basic System.
Dec 05 09:42:50 compute-0 systemd[73357]: Reached target Main User Target.
Dec 05 09:42:50 compute-0 systemd[73357]: Startup finished in 122ms.
Dec 05 09:42:50 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 05 09:42:50 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Dec 05 09:42:50 compute-0 sshd-session[73353]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:42:50 compute-0 sudo[73373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 05 09:42:50 compute-0 sudo[73373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:42:50 compute-0 sudo[73373]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:50 compute-0 sshd-session[73372]: Received disconnect from 192.168.122.100 port 50116:11: disconnected by user
Dec 05 09:42:50 compute-0 sshd-session[73372]: Disconnected from user ceph-admin 192.168.122.100 port 50116
Dec 05 09:42:50 compute-0 sshd-session[73353]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:42:50 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 05 09:42:50 compute-0 systemd-logind[789]: Session 19 logged out. Waiting for processes to exit.
Dec 05 09:42:50 compute-0 systemd-logind[789]: Removed session 19.
Dec 05 09:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:42:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:42:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat166962061-lower\x2dmapped.mount: Deactivated successfully.
Dec 05 09:43:01 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 05 09:43:01 compute-0 systemd[73357]: Activating special unit Exit the Session...
Dec 05 09:43:01 compute-0 systemd[73357]: Stopped target Main User Target.
Dec 05 09:43:01 compute-0 systemd[73357]: Stopped target Basic System.
Dec 05 09:43:01 compute-0 systemd[73357]: Stopped target Paths.
Dec 05 09:43:01 compute-0 systemd[73357]: Stopped target Sockets.
Dec 05 09:43:01 compute-0 systemd[73357]: Stopped target Timers.
Dec 05 09:43:01 compute-0 systemd[73357]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 05 09:43:01 compute-0 systemd[73357]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 09:43:01 compute-0 systemd[73357]: Closed D-Bus User Message Bus Socket.
Dec 05 09:43:01 compute-0 systemd[73357]: Stopped Create User's Volatile Files and Directories.
Dec 05 09:43:01 compute-0 systemd[73357]: Removed slice User Application Slice.
Dec 05 09:43:01 compute-0 systemd[73357]: Reached target Shutdown.
Dec 05 09:43:01 compute-0 systemd[73357]: Finished Exit the Session.
Dec 05 09:43:01 compute-0 systemd[73357]: Reached target Exit the Session.
Dec 05 09:43:01 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 05 09:43:01 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 05 09:43:01 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 05 09:43:01 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 05 09:43:01 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 05 09:43:01 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 05 09:43:01 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 05 09:43:14 compute-0 podman[73450]: 2025-12-05 09:43:14.791640488 +0000 UTC m=+23.529433923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:43:14 compute-0 podman[73542]: 2025-12-05 09:43:14.895097322 +0000 UTC m=+0.064755425 container create 64c04480930856f93c47e12f24201892b657be4bd1e838aa98c0d4cfd963209d (image=quay.io/ceph/ceph:v19, name=peaceful_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 09:43:14 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 05 09:43:14 compute-0 systemd[1]: Started libpod-conmon-64c04480930856f93c47e12f24201892b657be4bd1e838aa98c0d4cfd963209d.scope.
Dec 05 09:43:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:14 compute-0 podman[73542]: 2025-12-05 09:43:14.866213501 +0000 UTC m=+0.035871644 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:15 compute-0 podman[73542]: 2025-12-05 09:43:15.007790888 +0000 UTC m=+0.177449061 container init 64c04480930856f93c47e12f24201892b657be4bd1e838aa98c0d4cfd963209d (image=quay.io/ceph/ceph:v19, name=peaceful_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:15 compute-0 podman[73542]: 2025-12-05 09:43:15.017300899 +0000 UTC m=+0.186958982 container start 64c04480930856f93c47e12f24201892b657be4bd1e838aa98c0d4cfd963209d (image=quay.io/ceph/ceph:v19, name=peaceful_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:43:15 compute-0 podman[73542]: 2025-12-05 09:43:15.020773524 +0000 UTC m=+0.190431747 container attach 64c04480930856f93c47e12f24201892b657be4bd1e838aa98c0d4cfd963209d (image=quay.io/ceph/ceph:v19, name=peaceful_feistel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:43:15 compute-0 peaceful_feistel[73558]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec 05 09:43:15 compute-0 systemd[1]: libpod-64c04480930856f93c47e12f24201892b657be4bd1e838aa98c0d4cfd963209d.scope: Deactivated successfully.
Dec 05 09:43:15 compute-0 podman[73542]: 2025-12-05 09:43:15.137624594 +0000 UTC m=+0.307282687 container died 64c04480930856f93c47e12f24201892b657be4bd1e838aa98c0d4cfd963209d (image=quay.io/ceph/ceph:v19, name=peaceful_feistel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 09:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5053be189def82853501f6bdc989d6bd6e63a390b53bb598f566784bb62f95c8-merged.mount: Deactivated successfully.
Dec 05 09:43:15 compute-0 podman[73542]: 2025-12-05 09:43:15.190128061 +0000 UTC m=+0.359786174 container remove 64c04480930856f93c47e12f24201892b657be4bd1e838aa98c0d4cfd963209d (image=quay.io/ceph/ceph:v19, name=peaceful_feistel, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 05 09:43:15 compute-0 systemd[1]: libpod-conmon-64c04480930856f93c47e12f24201892b657be4bd1e838aa98c0d4cfd963209d.scope: Deactivated successfully.
Dec 05 09:43:15 compute-0 podman[73573]: 2025-12-05 09:43:15.26419933 +0000 UTC m=+0.046592907 container create 3454da8d0924946c9b7fd375dde3f7b180362a58e9079520533ead86d49388d7 (image=quay.io/ceph/ceph:v19, name=romantic_tu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 05 09:43:15 compute-0 systemd[1]: Started libpod-conmon-3454da8d0924946c9b7fd375dde3f7b180362a58e9079520533ead86d49388d7.scope.
Dec 05 09:43:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:15 compute-0 podman[73573]: 2025-12-05 09:43:15.331104023 +0000 UTC m=+0.113497600 container init 3454da8d0924946c9b7fd375dde3f7b180362a58e9079520533ead86d49388d7 (image=quay.io/ceph/ceph:v19, name=romantic_tu, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:15 compute-0 podman[73573]: 2025-12-05 09:43:15.338756981 +0000 UTC m=+0.121150558 container start 3454da8d0924946c9b7fd375dde3f7b180362a58e9079520533ead86d49388d7 (image=quay.io/ceph/ceph:v19, name=romantic_tu, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:15 compute-0 podman[73573]: 2025-12-05 09:43:15.245559269 +0000 UTC m=+0.027952866 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:15 compute-0 romantic_tu[73590]: 167 167
Dec 05 09:43:15 compute-0 podman[73573]: 2025-12-05 09:43:15.342362581 +0000 UTC m=+0.124756178 container attach 3454da8d0924946c9b7fd375dde3f7b180362a58e9079520533ead86d49388d7 (image=quay.io/ceph/ceph:v19, name=romantic_tu, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 05 09:43:15 compute-0 systemd[1]: libpod-3454da8d0924946c9b7fd375dde3f7b180362a58e9079520533ead86d49388d7.scope: Deactivated successfully.
Dec 05 09:43:15 compute-0 podman[73573]: 2025-12-05 09:43:15.344453417 +0000 UTC m=+0.126847004 container died 3454da8d0924946c9b7fd375dde3f7b180362a58e9079520533ead86d49388d7 (image=quay.io/ceph/ceph:v19, name=romantic_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 05 09:43:15 compute-0 podman[73573]: 2025-12-05 09:43:15.382346866 +0000 UTC m=+0.164740433 container remove 3454da8d0924946c9b7fd375dde3f7b180362a58e9079520533ead86d49388d7 (image=quay.io/ceph/ceph:v19, name=romantic_tu, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:43:15 compute-0 systemd[1]: libpod-conmon-3454da8d0924946c9b7fd375dde3f7b180362a58e9079520533ead86d49388d7.scope: Deactivated successfully.
Dec 05 09:43:15 compute-0 podman[73606]: 2025-12-05 09:43:15.447442128 +0000 UTC m=+0.042952197 container create 1bc6cccc7c3889919ed01a1246f530fbeb75e63524279d70b6c6271d98938da9 (image=quay.io/ceph/ceph:v19, name=lucid_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:15 compute-0 systemd[1]: Started libpod-conmon-1bc6cccc7c3889919ed01a1246f530fbeb75e63524279d70b6c6271d98938da9.scope.
Dec 05 09:43:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:15 compute-0 podman[73606]: 2025-12-05 09:43:15.50592659 +0000 UTC m=+0.101436639 container init 1bc6cccc7c3889919ed01a1246f530fbeb75e63524279d70b6c6271d98938da9 (image=quay.io/ceph/ceph:v19, name=lucid_pare, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:15 compute-0 podman[73606]: 2025-12-05 09:43:15.510722812 +0000 UTC m=+0.106232861 container start 1bc6cccc7c3889919ed01a1246f530fbeb75e63524279d70b6c6271d98938da9 (image=quay.io/ceph/ceph:v19, name=lucid_pare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 09:43:15 compute-0 podman[73606]: 2025-12-05 09:43:15.513750035 +0000 UTC m=+0.109260114 container attach 1bc6cccc7c3889919ed01a1246f530fbeb75e63524279d70b6c6271d98938da9 (image=quay.io/ceph/ceph:v19, name=lucid_pare, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:15 compute-0 podman[73606]: 2025-12-05 09:43:15.429253801 +0000 UTC m=+0.024763880 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:15 compute-0 lucid_pare[73623]: AQAzqTJptWySHxAALzENkFAXuywO6EZXGmfkwA==
Dec 05 09:43:15 compute-0 systemd[1]: libpod-1bc6cccc7c3889919ed01a1246f530fbeb75e63524279d70b6c6271d98938da9.scope: Deactivated successfully.
Dec 05 09:43:15 compute-0 podman[73606]: 2025-12-05 09:43:15.532739785 +0000 UTC m=+0.128249854 container died 1bc6cccc7c3889919ed01a1246f530fbeb75e63524279d70b6c6271d98938da9 (image=quay.io/ceph/ceph:v19, name=lucid_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:43:15 compute-0 podman[73606]: 2025-12-05 09:43:15.563374654 +0000 UTC m=+0.158884713 container remove 1bc6cccc7c3889919ed01a1246f530fbeb75e63524279d70b6c6271d98938da9 (image=quay.io/ceph/ceph:v19, name=lucid_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 05 09:43:15 compute-0 systemd[1]: libpod-conmon-1bc6cccc7c3889919ed01a1246f530fbeb75e63524279d70b6c6271d98938da9.scope: Deactivated successfully.
Dec 05 09:43:15 compute-0 podman[73642]: 2025-12-05 09:43:15.625329631 +0000 UTC m=+0.040768758 container create c60d73a8ba68b47f49a298e228028e5e863b63b82a03fb0c900ce2cbbbd092af (image=quay.io/ceph/ceph:v19, name=strange_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 09:43:15 compute-0 systemd[1]: Started libpod-conmon-c60d73a8ba68b47f49a298e228028e5e863b63b82a03fb0c900ce2cbbbd092af.scope.
Dec 05 09:43:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:15 compute-0 podman[73642]: 2025-12-05 09:43:15.681244622 +0000 UTC m=+0.096683749 container init c60d73a8ba68b47f49a298e228028e5e863b63b82a03fb0c900ce2cbbbd092af (image=quay.io/ceph/ceph:v19, name=strange_napier, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 09:43:15 compute-0 podman[73642]: 2025-12-05 09:43:15.686777384 +0000 UTC m=+0.102216501 container start c60d73a8ba68b47f49a298e228028e5e863b63b82a03fb0c900ce2cbbbd092af (image=quay.io/ceph/ceph:v19, name=strange_napier, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:43:15 compute-0 podman[73642]: 2025-12-05 09:43:15.691267976 +0000 UTC m=+0.106707113 container attach c60d73a8ba68b47f49a298e228028e5e863b63b82a03fb0c900ce2cbbbd092af (image=quay.io/ceph/ceph:v19, name=strange_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 09:43:15 compute-0 podman[73642]: 2025-12-05 09:43:15.607879252 +0000 UTC m=+0.023318389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:15 compute-0 strange_napier[73659]: AQAzqTJpHBM7KhAAZwqalOWSgOpGPccBz6afAg==
Dec 05 09:43:15 compute-0 systemd[1]: libpod-c60d73a8ba68b47f49a298e228028e5e863b63b82a03fb0c900ce2cbbbd092af.scope: Deactivated successfully.
Dec 05 09:43:15 compute-0 podman[73642]: 2025-12-05 09:43:15.711546491 +0000 UTC m=+0.126985608 container died c60d73a8ba68b47f49a298e228028e5e863b63b82a03fb0c900ce2cbbbd092af (image=quay.io/ceph/ceph:v19, name=strange_napier, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:15 compute-0 podman[73642]: 2025-12-05 09:43:15.752208126 +0000 UTC m=+0.167647263 container remove c60d73a8ba68b47f49a298e228028e5e863b63b82a03fb0c900ce2cbbbd092af (image=quay.io/ceph/ceph:v19, name=strange_napier, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:43:15 compute-0 systemd[1]: libpod-conmon-c60d73a8ba68b47f49a298e228028e5e863b63b82a03fb0c900ce2cbbbd092af.scope: Deactivated successfully.
Dec 05 09:43:15 compute-0 podman[73677]: 2025-12-05 09:43:15.81332921 +0000 UTC m=+0.042271889 container create f59f82e9064aeeada8e57e0d18f5531364c99c142fc0093444de065c6e7c3d40 (image=quay.io/ceph/ceph:v19, name=peaceful_banach, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:43:15 compute-0 systemd[1]: Started libpod-conmon-f59f82e9064aeeada8e57e0d18f5531364c99c142fc0093444de065c6e7c3d40.scope.
Dec 05 09:43:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:15 compute-0 podman[73677]: 2025-12-05 09:43:15.865631061 +0000 UTC m=+0.094573760 container init f59f82e9064aeeada8e57e0d18f5531364c99c142fc0093444de065c6e7c3d40 (image=quay.io/ceph/ceph:v19, name=peaceful_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 09:43:15 compute-0 podman[73677]: 2025-12-05 09:43:15.870639429 +0000 UTC m=+0.099582128 container start f59f82e9064aeeada8e57e0d18f5531364c99c142fc0093444de065c6e7c3d40 (image=quay.io/ceph/ceph:v19, name=peaceful_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 09:43:15 compute-0 podman[73677]: 2025-12-05 09:43:15.875145323 +0000 UTC m=+0.104088002 container attach f59f82e9064aeeada8e57e0d18f5531364c99c142fc0093444de065c6e7c3d40 (image=quay.io/ceph/ceph:v19, name=peaceful_banach, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Dec 05 09:43:15 compute-0 peaceful_banach[73694]: AQAzqTJp5MryNBAA0F2+qQKrqU2Ynfwy6k7Hog==
Dec 05 09:43:15 compute-0 podman[73677]: 2025-12-05 09:43:15.793493566 +0000 UTC m=+0.022436265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:15 compute-0 systemd[1]: libpod-f59f82e9064aeeada8e57e0d18f5531364c99c142fc0093444de065c6e7c3d40.scope: Deactivated successfully.
Dec 05 09:43:15 compute-0 podman[73677]: 2025-12-05 09:43:15.893107284 +0000 UTC m=+0.122049983 container died f59f82e9064aeeada8e57e0d18f5531364c99c142fc0093444de065c6e7c3d40 (image=quay.io/ceph/ceph:v19, name=peaceful_banach, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 09:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e382c3dd862b2ede1a4ca28e796d27df795a82ccb4093785e7387680e5aee87-merged.mount: Deactivated successfully.
Dec 05 09:43:15 compute-0 podman[73677]: 2025-12-05 09:43:15.930257322 +0000 UTC m=+0.159200001 container remove f59f82e9064aeeada8e57e0d18f5531364c99c142fc0093444de065c6e7c3d40 (image=quay.io/ceph/ceph:v19, name=peaceful_banach, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 09:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:43:15 compute-0 systemd[1]: libpod-conmon-f59f82e9064aeeada8e57e0d18f5531364c99c142fc0093444de065c6e7c3d40.scope: Deactivated successfully.
Dec 05 09:43:15 compute-0 podman[73711]: 2025-12-05 09:43:15.986004868 +0000 UTC m=+0.036732327 container create 7fa2936dfe5481b530d1fda237091d3607502d914a97356d5e6aedc6ebdf7aca (image=quay.io/ceph/ceph:v19, name=infallible_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 09:43:16 compute-0 systemd[1]: Started libpod-conmon-7fa2936dfe5481b530d1fda237091d3607502d914a97356d5e6aedc6ebdf7aca.scope.
Dec 05 09:43:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8c80b28710453a08ae0d92818215873fe8b0fa6c2a233ba3f0310296e8da9a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:16 compute-0 podman[73711]: 2025-12-05 09:43:16.052277224 +0000 UTC m=+0.103004713 container init 7fa2936dfe5481b530d1fda237091d3607502d914a97356d5e6aedc6ebdf7aca (image=quay.io/ceph/ceph:v19, name=infallible_mcclintock, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 09:43:16 compute-0 podman[73711]: 2025-12-05 09:43:16.059273876 +0000 UTC m=+0.110001335 container start 7fa2936dfe5481b530d1fda237091d3607502d914a97356d5e6aedc6ebdf7aca (image=quay.io/ceph/ceph:v19, name=infallible_mcclintock, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 05 09:43:16 compute-0 podman[73711]: 2025-12-05 09:43:16.06271318 +0000 UTC m=+0.113440639 container attach 7fa2936dfe5481b530d1fda237091d3607502d914a97356d5e6aedc6ebdf7aca (image=quay.io/ceph/ceph:v19, name=infallible_mcclintock, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 09:43:16 compute-0 podman[73711]: 2025-12-05 09:43:15.970737781 +0000 UTC m=+0.021465260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:16 compute-0 infallible_mcclintock[73728]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 05 09:43:16 compute-0 infallible_mcclintock[73728]: setting min_mon_release = quincy
Dec 05 09:43:16 compute-0 infallible_mcclintock[73728]: /usr/bin/monmaptool: set fsid to 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:16 compute-0 infallible_mcclintock[73728]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 05 09:43:16 compute-0 systemd[1]: libpod-7fa2936dfe5481b530d1fda237091d3607502d914a97356d5e6aedc6ebdf7aca.scope: Deactivated successfully.
Dec 05 09:43:16 compute-0 podman[73711]: 2025-12-05 09:43:16.092014162 +0000 UTC m=+0.142741621 container died 7fa2936dfe5481b530d1fda237091d3607502d914a97356d5e6aedc6ebdf7aca (image=quay.io/ceph/ceph:v19, name=infallible_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 09:43:16 compute-0 podman[73711]: 2025-12-05 09:43:16.133396955 +0000 UTC m=+0.184124414 container remove 7fa2936dfe5481b530d1fda237091d3607502d914a97356d5e6aedc6ebdf7aca (image=quay.io/ceph/ceph:v19, name=infallible_mcclintock, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:43:16 compute-0 systemd[1]: libpod-conmon-7fa2936dfe5481b530d1fda237091d3607502d914a97356d5e6aedc6ebdf7aca.scope: Deactivated successfully.
Dec 05 09:43:16 compute-0 podman[73746]: 2025-12-05 09:43:16.206376764 +0000 UTC m=+0.048250412 container create 8267bc6d57c05920272a67f8c8d97e161584ecb0b4e9d5b363328da64db179d3 (image=quay.io/ceph/ceph:v19, name=bold_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Dec 05 09:43:16 compute-0 systemd[1]: Started libpod-conmon-8267bc6d57c05920272a67f8c8d97e161584ecb0b4e9d5b363328da64db179d3.scope.
Dec 05 09:43:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44de65abcea29f385a5ed06bb16f6f9fa675d2a0dbe863be57c532399592e790/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44de65abcea29f385a5ed06bb16f6f9fa675d2a0dbe863be57c532399592e790/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44de65abcea29f385a5ed06bb16f6f9fa675d2a0dbe863be57c532399592e790/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44de65abcea29f385a5ed06bb16f6f9fa675d2a0dbe863be57c532399592e790/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:16 compute-0 podman[73746]: 2025-12-05 09:43:16.278573761 +0000 UTC m=+0.120447439 container init 8267bc6d57c05920272a67f8c8d97e161584ecb0b4e9d5b363328da64db179d3 (image=quay.io/ceph/ceph:v19, name=bold_beaver, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:43:16 compute-0 podman[73746]: 2025-12-05 09:43:16.185282997 +0000 UTC m=+0.027156675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:16 compute-0 podman[73746]: 2025-12-05 09:43:16.284992957 +0000 UTC m=+0.126866625 container start 8267bc6d57c05920272a67f8c8d97e161584ecb0b4e9d5b363328da64db179d3 (image=quay.io/ceph/ceph:v19, name=bold_beaver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:16 compute-0 podman[73746]: 2025-12-05 09:43:16.288810952 +0000 UTC m=+0.130684610 container attach 8267bc6d57c05920272a67f8c8d97e161584ecb0b4e9d5b363328da64db179d3 (image=quay.io/ceph/ceph:v19, name=bold_beaver, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 09:43:16 compute-0 systemd[1]: libpod-8267bc6d57c05920272a67f8c8d97e161584ecb0b4e9d5b363328da64db179d3.scope: Deactivated successfully.
Dec 05 09:43:16 compute-0 podman[73746]: 2025-12-05 09:43:16.392564494 +0000 UTC m=+0.234438152 container died 8267bc6d57c05920272a67f8c8d97e161584ecb0b4e9d5b363328da64db179d3 (image=quay.io/ceph/ceph:v19, name=bold_beaver, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:43:16 compute-0 podman[73746]: 2025-12-05 09:43:16.430177143 +0000 UTC m=+0.272050801 container remove 8267bc6d57c05920272a67f8c8d97e161584ecb0b4e9d5b363328da64db179d3 (image=quay.io/ceph/ceph:v19, name=bold_beaver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 09:43:16 compute-0 systemd[1]: libpod-conmon-8267bc6d57c05920272a67f8c8d97e161584ecb0b4e9d5b363328da64db179d3.scope: Deactivated successfully.
Dec 05 09:43:16 compute-0 systemd[1]: Reloading.
Dec 05 09:43:16 compute-0 systemd-rc-local-generator[73832]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:43:16 compute-0 systemd-sysv-generator[73835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:43:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:43:16 compute-0 systemd[1]: Reloading.
Dec 05 09:43:16 compute-0 systemd-rc-local-generator[73868]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:43:16 compute-0 systemd-sysv-generator[73872]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:43:16 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 05 09:43:16 compute-0 systemd[1]: Reloading.
Dec 05 09:43:17 compute-0 systemd-sysv-generator[73910]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:43:17 compute-0 systemd-rc-local-generator[73906]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:43:17 compute-0 systemd[1]: Reached target Ceph cluster 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:43:17 compute-0 systemd[1]: Reloading.
Dec 05 09:43:17 compute-0 systemd-sysv-generator[73946]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:43:17 compute-0 systemd-rc-local-generator[73941]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:43:17 compute-0 systemd[1]: Reloading.
Dec 05 09:43:17 compute-0 systemd-rc-local-generator[73980]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:43:17 compute-0 systemd-sysv-generator[73984]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:43:17 compute-0 systemd[1]: Created slice Slice /system/ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:43:17 compute-0 systemd[1]: Reached target System Time Set.
Dec 05 09:43:17 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec 05 09:43:17 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:43:18 compute-0 podman[74040]: 2025-12-05 09:43:18.02053259 +0000 UTC m=+0.025018647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:18 compute-0 podman[74040]: 2025-12-05 09:43:18.916859418 +0000 UTC m=+0.921345445 container create d00d8ac134224cb6adb368ef4083c0eee62a5d76951110de472951a18b85bb6e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eeb29657863c2abdb1f7132627b03dc935a45a64e61e8dc7154bb51a3761575/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eeb29657863c2abdb1f7132627b03dc935a45a64e61e8dc7154bb51a3761575/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eeb29657863c2abdb1f7132627b03dc935a45a64e61e8dc7154bb51a3761575/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eeb29657863c2abdb1f7132627b03dc935a45a64e61e8dc7154bb51a3761575/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:20 compute-0 podman[74040]: 2025-12-05 09:43:20.132694345 +0000 UTC m=+2.137180392 container init d00d8ac134224cb6adb368ef4083c0eee62a5d76951110de472951a18b85bb6e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 09:43:20 compute-0 podman[74040]: 2025-12-05 09:43:20.139322066 +0000 UTC m=+2.143808093 container start d00d8ac134224cb6adb368ef4083c0eee62a5d76951110de472951a18b85bb6e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:43:20 compute-0 ceph-mon[74060]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 09:43:20 compute-0 ceph-mon[74060]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 05 09:43:20 compute-0 ceph-mon[74060]: pidfile_write: ignore empty --pid-file
Dec 05 09:43:20 compute-0 ceph-mon[74060]: load: jerasure load: lrc 
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: RocksDB version: 7.9.2
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Git sha 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: DB SUMMARY
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: DB Session ID:  7TAHGEBCXUUAIX4TLRSK
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: CURRENT file:  CURRENT
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                         Options.error_if_exists: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                       Options.create_if_missing: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                                     Options.env: 0x55a7bf65ac20
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                                Options.info_log: 0x55a7c13ded60
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                              Options.statistics: (nil)
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                               Options.use_fsync: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                              Options.db_log_dir: 
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                                 Options.wal_dir: 
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                    Options.write_buffer_manager: 0x55a7c13e3900
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                  Options.unordered_write: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                               Options.row_cache: None
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                              Options.wal_filter: None
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.two_write_queues: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.wal_compression: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.atomic_flush: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.max_background_jobs: 2
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.max_background_compactions: -1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.max_subcompactions: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.max_total_wal_size: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                          Options.max_open_files: -1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:       Options.compaction_readahead_size: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Compression algorithms supported:
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         kZSTD supported: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         kXpressCompression supported: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         kBZip2Compression supported: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         kLZ4Compression supported: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         kZlibCompression supported: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         kSnappyCompression supported: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: DMutex implementation: pthread_mutex_t
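The long Options dump above is RocksDB echoing its effective DBOptions at open. The monitor seeds these from Ceph's mon_rocksdb_options setting rather than raw RocksDB defaults, which is why values such as write_buffer_size=33554432 and NoCompression appear in the column-family dump below. A sketch for reading the effective setting (the tell form assumes a reachable mon; the enter form assumes cephadm):

  ceph tell mon.compute-0 config get mon_rocksdb_options
  # or through the daemon container's admin socket
  cephadm enter --name mon.compute-0 -- ceph daemon mon.compute-0 config get mon_rocksdb_options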
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:           Options.merge_operator: 
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:        Options.compaction_filter: None
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a7c13de500)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a7c1403350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:        Options.write_buffer_size: 33554432
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:  Options.max_write_buffer_number: 2
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:          Options.compression: NoCompression
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.num_levels: 7
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0c84246f-bc02-4e85-8436-bed956adac07
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927800179529, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 05 09:43:20 compute-0 ceph-mon[74060]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 05 09:43:21 compute-0 ceph-mon[74060]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927801322063, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "7TAHGEBCXUUAIX4TLRSK", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:43:21 compute-0 ceph-mon[74060]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927801322292, "job": 1, "event": "recovery_finished"}
Dec 05 09:43:21 compute-0 ceph-mon[74060]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
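Recovery replayed the single WAL (000004.log) into table file 8 (5 keys, 1944 bytes) and then wrote a fresh MANIFEST. The EVENT_LOG_v1 records are one-line JSON, so the recovery timeline can be pulled out of the journal; a sketch assuming jq is available:

  journalctl -u ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@mon.compute-0 \
    | grep -o 'EVENT_LOG_v1 {.*' | sed 's/^EVENT_LOG_v1 //' | jq -c '{time_micros, event}'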
Dec 05 09:43:21 compute-0 bash[74040]: d00d8ac134224cb6adb368ef4083c0eee62a5d76951110de472951a18b85bb6e
Dec 05 09:43:21 compute-0 systemd[1]: Started Ceph mon.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:43:21 compute-0 ceph-mon[74060]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:43:21 compute-0 ceph-mon[74060]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a7c1404e00
Dec 05 09:43:21 compute-0 ceph-mon[74060]: rocksdb: DB pointer 0x55a7c150e000
Dec 05 09:43:21 compute-0 ceph-mon[74060]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:43:21 compute-0 ceph-mon[74060]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1.2 total, 1.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.14              0.00         1    1.143       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.14              0.00         1    1.143       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.14              0.00         1    1.143       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      1.14              0.00         1    1.143       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.2 total, 1.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a7c1403350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
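This block is the periodic statistics dump RocksDB emits at open and then every stats_dump_period_sec (600 s per the options above); the single 1.14 s entry in the L0 row appears to be the flush performed during WAL recovery. If the mon store ever needs it, a compaction can also be requested explicitly with a standard mon command:

  ceph tell mon.compute-0 compact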
Dec 05 09:43:21 compute-0 ceph-mon[74060]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@-1(???) e0 preinit fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 05 09:43:21 compute-0 ceph-mon[74060]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 05 09:43:21 compute-0 ceph-mon[74060]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(cluster) log [DBG] : fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(cluster) log [DBG] : last_changed 2025-12-05T09:43:16.088283+0000
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(cluster) log [DBG] : created 2025-12-05T09:43:16.088283+0000
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
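By this point the mon has won a standalone election, formed a one-member quorum, and recorded its own host metadata (the repeated vda warning just means the virtio disk exposes no model or serial for device-id purposes; it is harmless). Once the cluster answers, the same information is queryable:

  ceph mon dump
  ceph mon metadata compute-0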
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).mds e1 new map
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-12-05T09:43:21.401410+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
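The first MDSMap (e1) is created empty: multiple-filesystem support is flagged on, the default compat set is recorded, and no filesystems exist. Standard commands to confirm that state later:

  ceph fs ls
  ceph fs dump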
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(cluster) log [DBG] : fsmap 
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mkfs 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
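osdmap e1 and mgrmap e1 are likewise empty, the expected state seconds after bootstrap and before any mgr or OSD daemons have been deployed. Quorum and map state can be checked with:

  ceph quorum_status -f json-pretty
  ceph osd dump
  ceph mgr stat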
Dec 05 09:43:21 compute-0 podman[74088]: 2025-12-05 09:43:21.440566737 +0000 UTC m=+0.051635336 container create 24f06abb09ed1dedbea5a14b12bd1aa3f729b2e6afbdd18a0c8619c84759e33e (image=quay.io/ceph/ceph:v19, name=inspiring_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 09:43:21 compute-0 systemd[1]: Started libpod-conmon-24f06abb09ed1dedbea5a14b12bd1aa3f729b2e6afbdd18a0c8619c84759e33e.scope.
Dec 05 09:43:21 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:21 compute-0 podman[74088]: 2025-12-05 09:43:21.415035717 +0000 UTC m=+0.026104356 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6f11929a4285c04baf45831dbf75bd7665a36911505b06cbc16c0cfd93ce7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6f11929a4285c04baf45831dbf75bd7665a36911505b06cbc16c0cfd93ce7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6f11929a4285c04baf45831dbf75bd7665a36911505b06cbc16c0cfd93ce7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:21 compute-0 podman[74088]: 2025-12-05 09:43:21.531784595 +0000 UTC m=+0.142853204 container init 24f06abb09ed1dedbea5a14b12bd1aa3f729b2e6afbdd18a0c8619c84759e33e (image=quay.io/ceph/ceph:v19, name=inspiring_lovelace, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:43:21 compute-0 podman[74088]: 2025-12-05 09:43:21.541044478 +0000 UTC m=+0.152113087 container start 24f06abb09ed1dedbea5a14b12bd1aa3f729b2e6afbdd18a0c8619c84759e33e (image=quay.io/ceph/ceph:v19, name=inspiring_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:21 compute-0 podman[74088]: 2025-12-05 09:43:21.545364597 +0000 UTC m=+0.156433236 container attach 24f06abb09ed1dedbea5a14b12bd1aa3f729b2e6afbdd18a0c8619c84759e33e (image=quay.io/ceph/ceph:v19, name=inspiring_lovelace, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 09:43:21 compute-0 ceph-mon[74060]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 05 09:43:21 compute-0 ceph-mon[74060]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639113141' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:   cluster:
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:     id:     3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:     health: HEALTH_OK
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:  
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:   services:
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:     mon: 1 daemons, quorum compute-0 (age 0.380811s)
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:     mgr: no daemons active
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:     osd: 0 osds: 0 up, 0 in
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:  
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:   data:
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:     pools:   0 pools, 0 pgs
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:     objects: 0 objects, 0 B
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:     usage:   0 B used, 0 B / 0 B avail
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:     pgs:     
Dec 05 09:43:21 compute-0 inspiring_lovelace[74114]:  
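The short-lived container inspiring_lovelace is how cephadm runs one-shot CLI calls during bootstrap: it starts the ceph image, executes a single command (here the status mon_command dispatched above), prints the output, and is torn down and removed moments later. The manual equivalent would be roughly:

  cephadm shell -- ceph -s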
Dec 05 09:43:21 compute-0 systemd[1]: libpod-24f06abb09ed1dedbea5a14b12bd1aa3f729b2e6afbdd18a0c8619c84759e33e.scope: Deactivated successfully.
Dec 05 09:43:21 compute-0 podman[74088]: 2025-12-05 09:43:21.799254991 +0000 UTC m=+0.410323620 container died 24f06abb09ed1dedbea5a14b12bd1aa3f729b2e6afbdd18a0c8619c84759e33e (image=quay.io/ceph/ceph:v19, name=inspiring_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 09:43:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-60f6f11929a4285c04baf45831dbf75bd7665a36911505b06cbc16c0cfd93ce7-merged.mount: Deactivated successfully.
Dec 05 09:43:21 compute-0 podman[74088]: 2025-12-05 09:43:21.93431983 +0000 UTC m=+0.545388439 container remove 24f06abb09ed1dedbea5a14b12bd1aa3f729b2e6afbdd18a0c8619c84759e33e (image=quay.io/ceph/ceph:v19, name=inspiring_lovelace, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:21 compute-0 systemd[1]: libpod-conmon-24f06abb09ed1dedbea5a14b12bd1aa3f729b2e6afbdd18a0c8619c84759e33e.scope: Deactivated successfully.
Dec 05 09:43:22 compute-0 podman[74156]: 2025-12-05 09:43:22.000630905 +0000 UTC m=+0.044902871 container create cba915a8996c97ca30e11698fa50b96bc59dfb0e4fd67b3db47f48cbb57628ed (image=quay.io/ceph/ceph:v19, name=lucid_brattain, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:43:22 compute-0 systemd[1]: Started libpod-conmon-cba915a8996c97ca30e11698fa50b96bc59dfb0e4fd67b3db47f48cbb57628ed.scope.
Dec 05 09:43:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6a442481895d25a522446cf7eb7ad79378bf640543f687cdafd8414b992499/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6a442481895d25a522446cf7eb7ad79378bf640543f687cdafd8414b992499/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6a442481895d25a522446cf7eb7ad79378bf640543f687cdafd8414b992499/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6a442481895d25a522446cf7eb7ad79378bf640543f687cdafd8414b992499/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:22 compute-0 podman[74156]: 2025-12-05 09:43:21.983586148 +0000 UTC m=+0.027858124 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:22 compute-0 podman[74156]: 2025-12-05 09:43:22.087709351 +0000 UTC m=+0.131981337 container init cba915a8996c97ca30e11698fa50b96bc59dfb0e4fd67b3db47f48cbb57628ed (image=quay.io/ceph/ceph:v19, name=lucid_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 09:43:22 compute-0 podman[74156]: 2025-12-05 09:43:22.095976197 +0000 UTC m=+0.140248143 container start cba915a8996c97ca30e11698fa50b96bc59dfb0e4fd67b3db47f48cbb57628ed (image=quay.io/ceph/ceph:v19, name=lucid_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 09:43:22 compute-0 podman[74156]: 2025-12-05 09:43:22.100057669 +0000 UTC m=+0.144329655 container attach cba915a8996c97ca30e11698fa50b96bc59dfb0e4fd67b3db47f48cbb57628ed (image=quay.io/ceph/ceph:v19, name=lucid_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 05 09:43:22 compute-0 ceph-mon[74060]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 05 09:43:22 compute-0 ceph-mon[74060]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2846987227' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 05 09:43:22 compute-0 ceph-mon[74060]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2846987227' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 05 09:43:22 compute-0 lucid_brattain[74173]: 
Dec 05 09:43:22 compute-0 lucid_brattain[74173]: [global]
Dec 05 09:43:22 compute-0 lucid_brattain[74173]:         fsid = 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:22 compute-0 lucid_brattain[74173]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
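lucid_brattain is the config assimilate-conf step: the bootstrap ceph.conf is fed to the monitor, every option that can live in the mon config database is stored there, and the remainder that must stay file-based (here only fsid and mon_host) is printed back as the minimal conf. Done by hand it looks like:

  ceph config assimilate-conf -i /etc/ceph/ceph.conf
  # then inspect what landed in the config database
  ceph config dump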
Dec 05 09:43:22 compute-0 systemd[1]: libpod-cba915a8996c97ca30e11698fa50b96bc59dfb0e4fd67b3db47f48cbb57628ed.scope: Deactivated successfully.
Dec 05 09:43:22 compute-0 podman[74156]: 2025-12-05 09:43:22.311383236 +0000 UTC m=+0.355655192 container died cba915a8996c97ca30e11698fa50b96bc59dfb0e4fd67b3db47f48cbb57628ed (image=quay.io/ceph/ceph:v19, name=lucid_brattain, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:43:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-be6a442481895d25a522446cf7eb7ad79378bf640543f687cdafd8414b992499-merged.mount: Deactivated successfully.
Dec 05 09:43:22 compute-0 podman[74156]: 2025-12-05 09:43:22.34546347 +0000 UTC m=+0.389735436 container remove cba915a8996c97ca30e11698fa50b96bc59dfb0e4fd67b3db47f48cbb57628ed (image=quay.io/ceph/ceph:v19, name=lucid_brattain, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 09:43:22 compute-0 systemd[1]: libpod-conmon-cba915a8996c97ca30e11698fa50b96bc59dfb0e4fd67b3db47f48cbb57628ed.scope: Deactivated successfully.
Dec 05 09:43:22 compute-0 podman[74211]: 2025-12-05 09:43:22.402443141 +0000 UTC m=+0.038024333 container create 4f3bb359622535e6cd5e7e83027cf59bbcfae0fe58fb4633cfb91394447bc614 (image=quay.io/ceph/ceph:v19, name=hopeful_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:43:22 compute-0 ceph-mon[74060]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 05 09:43:22 compute-0 ceph-mon[74060]: monmap epoch 1
Dec 05 09:43:22 compute-0 ceph-mon[74060]: fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:22 compute-0 ceph-mon[74060]: last_changed 2025-12-05T09:43:16.088283+0000
Dec 05 09:43:22 compute-0 ceph-mon[74060]: created 2025-12-05T09:43:16.088283+0000
Dec 05 09:43:22 compute-0 ceph-mon[74060]: min_mon_release 19 (squid)
Dec 05 09:43:22 compute-0 ceph-mon[74060]: election_strategy: 1
Dec 05 09:43:22 compute-0 ceph-mon[74060]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 05 09:43:22 compute-0 ceph-mon[74060]: fsmap 
Dec 05 09:43:22 compute-0 ceph-mon[74060]: osdmap e1: 0 total, 0 up, 0 in
Dec 05 09:43:22 compute-0 ceph-mon[74060]: mgrmap e1: no daemons active
Dec 05 09:43:22 compute-0 ceph-mon[74060]: from='client.? 192.168.122.100:0/3639113141' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 09:43:22 compute-0 ceph-mon[74060]: from='client.? 192.168.122.100:0/2846987227' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 05 09:43:22 compute-0 ceph-mon[74060]: from='client.? 192.168.122.100:0/2846987227' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 05 09:43:22 compute-0 systemd[1]: Started libpod-conmon-4f3bb359622535e6cd5e7e83027cf59bbcfae0fe58fb4633cfb91394447bc614.scope.
Dec 05 09:43:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3a7c74358eab5666dfd031f02c1a90cfbc981edad1dc8cb2fe307cae14fae5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3a7c74358eab5666dfd031f02c1a90cfbc981edad1dc8cb2fe307cae14fae5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3a7c74358eab5666dfd031f02c1a90cfbc981edad1dc8cb2fe307cae14fae5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3a7c74358eab5666dfd031f02c1a90cfbc981edad1dc8cb2fe307cae14fae5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:22 compute-0 podman[74211]: 2025-12-05 09:43:22.385967639 +0000 UTC m=+0.021548851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:22 compute-0 podman[74211]: 2025-12-05 09:43:22.489090144 +0000 UTC m=+0.124671366 container init 4f3bb359622535e6cd5e7e83027cf59bbcfae0fe58fb4633cfb91394447bc614 (image=quay.io/ceph/ceph:v19, name=hopeful_diffie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 05 09:43:22 compute-0 podman[74211]: 2025-12-05 09:43:22.494530913 +0000 UTC m=+0.130112105 container start 4f3bb359622535e6cd5e7e83027cf59bbcfae0fe58fb4633cfb91394447bc614 (image=quay.io/ceph/ceph:v19, name=hopeful_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 05 09:43:22 compute-0 podman[74211]: 2025-12-05 09:43:22.497628638 +0000 UTC m=+0.133209850 container attach 4f3bb359622535e6cd5e7e83027cf59bbcfae0fe58fb4633cfb91394447bc614 (image=quay.io/ceph/ceph:v19, name=hopeful_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 05 09:43:22 compute-0 ceph-mon[74060]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:43:22 compute-0 ceph-mon[74060]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/591398858' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:43:22 compute-0 systemd[1]: libpod-4f3bb359622535e6cd5e7e83027cf59bbcfae0fe58fb4633cfb91394447bc614.scope: Deactivated successfully.
Dec 05 09:43:22 compute-0 conmon[74227]: conmon 4f3bb359622535e6cd5e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f3bb359622535e6cd5e7e83027cf59bbcfae0fe58fb4633cfb91394447bc614.scope/container/memory.events
Dec 05 09:43:22 compute-0 podman[74211]: 2025-12-05 09:43:22.728629894 +0000 UTC m=+0.364211086 container died 4f3bb359622535e6cd5e7e83027cf59bbcfae0fe58fb4633cfb91394447bc614 (image=quay.io/ceph/ceph:v19, name=hopeful_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 09:43:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd3a7c74358eab5666dfd031f02c1a90cfbc981edad1dc8cb2fe307cae14fae5-merged.mount: Deactivated successfully.
Dec 05 09:43:24 compute-0 ceph-mon[74060]: from='client.? 192.168.122.100:0/591398858' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:43:24 compute-0 podman[74211]: 2025-12-05 09:43:24.365904446 +0000 UTC m=+2.001485638 container remove 4f3bb359622535e6cd5e7e83027cf59bbcfae0fe58fb4633cfb91394447bc614 (image=quay.io/ceph/ceph:v19, name=hopeful_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 09:43:24 compute-0 systemd[1]: libpod-conmon-4f3bb359622535e6cd5e7e83027cf59bbcfae0fe58fb4633cfb91394447bc614.scope: Deactivated successfully.
Dec 05 09:43:24 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:43:24 compute-0 ceph-mon[74060]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 05 09:43:24 compute-0 ceph-mon[74060]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 05 09:43:24 compute-0 ceph-mon[74060]: mon.compute-0@0(leader) e1 shutdown
Dec 05 09:43:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0[74056]: 2025-12-05T09:43:24.588+0000 7f7f7c0e2640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 05 09:43:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0[74056]: 2025-12-05T09:43:24.588+0000 7f7f7c0e2640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 05 09:43:24 compute-0 ceph-mon[74060]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 05 09:43:24 compute-0 ceph-mon[74060]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 05 09:43:24 compute-0 podman[74295]: 2025-12-05 09:43:24.803249724 +0000 UTC m=+0.274565991 container died d00d8ac134224cb6adb368ef4083c0eee62a5d76951110de472951a18b85bb6e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eeb29657863c2abdb1f7132627b03dc935a45a64e61e8dc7154bb51a3761575-merged.mount: Deactivated successfully.
Dec 05 09:43:24 compute-0 podman[74295]: 2025-12-05 09:43:24.838142889 +0000 UTC m=+0.309459126 container remove d00d8ac134224cb6adb368ef4083c0eee62a5d76951110de472951a18b85bb6e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 05 09:43:24 compute-0 bash[74295]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0
Dec 05 09:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 09:43:24 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@mon.compute-0.service: Deactivated successfully.
Dec 05 09:43:24 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:43:24 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:43:25 compute-0 podman[74399]: 2025-12-05 09:43:25.168560858 +0000 UTC m=+0.046218306 container create 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac277fd2c42c50bd3a3a36570858475b11fd3858a2fa8dfe09e3240174483da9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac277fd2c42c50bd3a3a36570858475b11fd3858a2fa8dfe09e3240174483da9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac277fd2c42c50bd3a3a36570858475b11fd3858a2fa8dfe09e3240174483da9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac277fd2c42c50bd3a3a36570858475b11fd3858a2fa8dfe09e3240174483da9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:25 compute-0 podman[74399]: 2025-12-05 09:43:25.232621123 +0000 UTC m=+0.110278601 container init 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 09:43:25 compute-0 podman[74399]: 2025-12-05 09:43:25.238319819 +0000 UTC m=+0.115977267 container start 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:25 compute-0 podman[74399]: 2025-12-05 09:43:25.14852936 +0000 UTC m=+0.026186838 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:25 compute-0 ceph-mon[74418]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 09:43:25 compute-0 ceph-mon[74418]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 05 09:43:25 compute-0 ceph-mon[74418]: pidfile_write: ignore empty --pid-file
Dec 05 09:43:25 compute-0 ceph-mon[74418]: load: jerasure load: lrc 
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: RocksDB version: 7.9.2
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Git sha 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: DB SUMMARY
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: DB Session ID:  IJYRF1EZAD763P730E19
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: CURRENT file:  CURRENT
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60149 ; 
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                         Options.error_if_exists: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                       Options.create_if_missing: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                                     Options.env: 0x5585d37f6c20
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                                Options.info_log: 0x5585d4ef5ac0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                              Options.statistics: (nil)
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                               Options.use_fsync: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                              Options.db_log_dir: 
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                                 Options.wal_dir: 
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                    Options.write_buffer_manager: 0x5585d4ef9900
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                  Options.unordered_write: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                               Options.row_cache: None
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                              Options.wal_filter: None
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.two_write_queues: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.wal_compression: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.atomic_flush: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.max_background_jobs: 2
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.max_background_compactions: -1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.max_subcompactions: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.max_total_wal_size: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                          Options.max_open_files: -1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:       Options.compaction_readahead_size: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Compression algorithms supported:
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         kZSTD supported: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         kXpressCompression supported: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         kBZip2Compression supported: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         kLZ4Compression supported: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         kZlibCompression supported: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         kSnappyCompression supported: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:           Options.merge_operator: 
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:        Options.compaction_filter: None
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5585d4ef4aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5585d4f19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:        Options.write_buffer_size: 33554432
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:  Options.max_write_buffer_number: 2
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:          Options.compression: NoCompression
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.num_levels: 7
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0c84246f-bc02-4e85-8436-bed956adac07
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927805281003, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927805744699, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59756, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 144, "table_properties": {"data_size": 58231, "index_size": 167, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3238, "raw_average_key_size": 30, "raw_value_size": 55695, "raw_average_value_size": 520, "num_data_blocks": 9, "num_entries": 107, "num_filter_entries": 107, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927805, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927805744899, "job": 1, "event": "recovery_finished"}
Dec 05 09:43:25 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 05 09:43:25 compute-0 bash[74399]: 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e
Dec 05 09:43:25 compute-0 systemd[1]: Started Ceph mon.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:43:25 compute-0 podman[74440]: 2025-12-05 09:43:25.822972331 +0000 UTC m=+0.049136757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:25 compute-0 podman[74440]: 2025-12-05 09:43:25.963403217 +0000 UTC m=+0.189567613 container create bff900d34cc6ce1747e7c10dcc1dae1032a61d638cfacc115ddbd2077070fac2 (image=quay.io/ceph/ceph:v19, name=festive_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 09:43:26 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:43:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5585d4f1ae00
Dec 05 09:43:26 compute-0 ceph-mon[74418]: rocksdb: DB pointer 0x5585d5024000
Dec 05 09:43:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:43:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.7 total, 0.7 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.25 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.46              0.00         1    0.463       0      0       0.0       0.0
                                            Sum      2/0   60.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.46              0.00         1    0.463       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.46              0.00         1    0.463       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.46              0.00         1    0.463       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.7 total, 0.7 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.5 seconds
                                           Interval compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5585d4f19350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 05 09:43:26 compute-0 ceph-mon[74418]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@-1(???) e1 preinit fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@-1(???).mds e1 new map
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-12-05T09:43:21.401410+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 05 09:43:26 compute-0 ceph-mon[74418]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:43:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 05 09:43:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 05 09:43:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : last_changed 2025-12-05T09:43:16.088283+0000
Dec 05 09:43:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : created 2025-12-05T09:43:16.088283+0000
Dec 05 09:43:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 05 09:43:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 05 09:43:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:43:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap 
Dec 05 09:43:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 05 09:43:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 05 09:43:26 compute-0 systemd[1]: Started libpod-conmon-bff900d34cc6ce1747e7c10dcc1dae1032a61d638cfacc115ddbd2077070fac2.scope.
Dec 05 09:43:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1388d6e9f314f89fb6894a0467bdfde7730eebe1f29f4e07c4f869c8dcaf8ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1388d6e9f314f89fb6894a0467bdfde7730eebe1f29f4e07c4f869c8dcaf8ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1388d6e9f314f89fb6894a0467bdfde7730eebe1f29f4e07c4f869c8dcaf8ba/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 05 09:43:26 compute-0 ceph-mon[74418]: monmap epoch 1
Dec 05 09:43:26 compute-0 ceph-mon[74418]: fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:26 compute-0 ceph-mon[74418]: last_changed 2025-12-05T09:43:16.088283+0000
Dec 05 09:43:26 compute-0 ceph-mon[74418]: created 2025-12-05T09:43:16.088283+0000
Dec 05 09:43:26 compute-0 ceph-mon[74418]: min_mon_release 19 (squid)
Dec 05 09:43:26 compute-0 ceph-mon[74418]: election_strategy: 1
Dec 05 09:43:26 compute-0 ceph-mon[74418]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 05 09:43:26 compute-0 ceph-mon[74418]: fsmap 
Dec 05 09:43:26 compute-0 ceph-mon[74418]: osdmap e1: 0 total, 0 up, 0 in
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mgrmap e1: no daemons active
Dec 05 09:43:26 compute-0 podman[74440]: 2025-12-05 09:43:26.084142494 +0000 UTC m=+0.310306920 container init bff900d34cc6ce1747e7c10dcc1dae1032a61d638cfacc115ddbd2077070fac2 (image=quay.io/ceph/ceph:v19, name=festive_dijkstra, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 05 09:43:26 compute-0 podman[74440]: 2025-12-05 09:43:26.094799656 +0000 UTC m=+0.320964042 container start bff900d34cc6ce1747e7c10dcc1dae1032a61d638cfacc115ddbd2077070fac2 (image=quay.io/ceph/ceph:v19, name=festive_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:43:26 compute-0 podman[74440]: 2025-12-05 09:43:26.098833836 +0000 UTC m=+0.324998242 container attach bff900d34cc6ce1747e7c10dcc1dae1032a61d638cfacc115ddbd2077070fac2 (image=quay.io/ceph/ceph:v19, name=festive_dijkstra, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 09:43:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec 05 09:43:26 compute-0 systemd[1]: libpod-bff900d34cc6ce1747e7c10dcc1dae1032a61d638cfacc115ddbd2077070fac2.scope: Deactivated successfully.
Dec 05 09:43:26 compute-0 podman[74440]: 2025-12-05 09:43:26.31112278 +0000 UTC m=+0.537287196 container died bff900d34cc6ce1747e7c10dcc1dae1032a61d638cfacc115ddbd2077070fac2 (image=quay.io/ceph/ceph:v19, name=festive_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 09:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1388d6e9f314f89fb6894a0467bdfde7730eebe1f29f4e07c4f869c8dcaf8ba-merged.mount: Deactivated successfully.
Dec 05 09:43:26 compute-0 podman[74440]: 2025-12-05 09:43:26.92713717 +0000 UTC m=+1.153301566 container remove bff900d34cc6ce1747e7c10dcc1dae1032a61d638cfacc115ddbd2077070fac2 (image=quay.io/ceph/ceph:v19, name=festive_dijkstra, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 09:43:26 compute-0 systemd[1]: libpod-conmon-bff900d34cc6ce1747e7c10dcc1dae1032a61d638cfacc115ddbd2077070fac2.scope: Deactivated successfully.
Dec 05 09:43:26 compute-0 podman[74512]: 2025-12-05 09:43:26.990437053 +0000 UTC m=+0.043905849 container create 03d1bee7bd7a44e620667435fe12d1a1613717e026ca7e552157c85b120bc06a (image=quay.io/ceph/ceph:v19, name=clever_colden, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:43:27 compute-0 systemd[1]: Started libpod-conmon-03d1bee7bd7a44e620667435fe12d1a1613717e026ca7e552157c85b120bc06a.scope.
Dec 05 09:43:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b25dd5e8bd3cc9f622da9b5bc6c270344cbd3a93c15d55666b52bbb60300c6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b25dd5e8bd3cc9f622da9b5bc6c270344cbd3a93c15d55666b52bbb60300c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b25dd5e8bd3cc9f622da9b5bc6c270344cbd3a93c15d55666b52bbb60300c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:27 compute-0 podman[74512]: 2025-12-05 09:43:26.969293191 +0000 UTC m=+0.022762007 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:27 compute-0 podman[74512]: 2025-12-05 09:43:27.073790925 +0000 UTC m=+0.127259721 container init 03d1bee7bd7a44e620667435fe12d1a1613717e026ca7e552157c85b120bc06a (image=quay.io/ceph/ceph:v19, name=clever_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 05 09:43:27 compute-0 podman[74512]: 2025-12-05 09:43:27.079888876 +0000 UTC m=+0.133357642 container start 03d1bee7bd7a44e620667435fe12d1a1613717e026ca7e552157c85b120bc06a (image=quay.io/ceph/ceph:v19, name=clever_colden, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:27 compute-0 podman[74512]: 2025-12-05 09:43:27.083780802 +0000 UTC m=+0.137249578 container attach 03d1bee7bd7a44e620667435fe12d1a1613717e026ca7e552157c85b120bc06a (image=quay.io/ceph/ceph:v19, name=clever_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 09:43:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec 05 09:43:27 compute-0 systemd[1]: libpod-03d1bee7bd7a44e620667435fe12d1a1613717e026ca7e552157c85b120bc06a.scope: Deactivated successfully.
Dec 05 09:43:27 compute-0 podman[74512]: 2025-12-05 09:43:27.308121405 +0000 UTC m=+0.361590211 container died 03d1bee7bd7a44e620667435fe12d1a1613717e026ca7e552157c85b120bc06a (image=quay.io/ceph/ceph:v19, name=clever_colden, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 05 09:43:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-05b25dd5e8bd3cc9f622da9b5bc6c270344cbd3a93c15d55666b52bbb60300c6-merged.mount: Deactivated successfully.
Dec 05 09:43:27 compute-0 podman[74512]: 2025-12-05 09:43:27.34543607 +0000 UTC m=+0.398904846 container remove 03d1bee7bd7a44e620667435fe12d1a1613717e026ca7e552157c85b120bc06a (image=quay.io/ceph/ceph:v19, name=clever_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:43:27 compute-0 systemd[1]: libpod-conmon-03d1bee7bd7a44e620667435fe12d1a1613717e026ca7e552157c85b120bc06a.scope: Deactivated successfully.
Dec 05 09:43:27 compute-0 systemd[1]: Reloading.
Dec 05 09:43:27 compute-0 systemd-sysv-generator[74595]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:43:27 compute-0 systemd-rc-local-generator[74591]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:43:27 compute-0 systemd[1]: Reloading.
Dec 05 09:43:27 compute-0 systemd-rc-local-generator[74635]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:43:27 compute-0 systemd-sysv-generator[74640]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:43:27 compute-0 systemd[1]: Starting Ceph mgr.compute-0.hvnxai for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:43:28 compute-0 podman[74692]: 2025-12-05 09:43:28.201731322 +0000 UTC m=+0.035442360 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:28 compute-0 podman[74692]: 2025-12-05 09:43:28.560205372 +0000 UTC m=+0.393916390 container create 95284dae4ab8bda36351330c01104311ce5c98867ee2b97f5ffc45ab8a1d48d1 (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95618a62d73c6d20aa7b7751dd0bea0f017505db557486e8a41b0d20133bb52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95618a62d73c6d20aa7b7751dd0bea0f017505db557486e8a41b0d20133bb52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95618a62d73c6d20aa7b7751dd0bea0f017505db557486e8a41b0d20133bb52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95618a62d73c6d20aa7b7751dd0bea0f017505db557486e8a41b0d20133bb52/merged/var/lib/ceph/mgr/ceph-compute-0.hvnxai supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:29 compute-0 podman[74692]: 2025-12-05 09:43:29.237663312 +0000 UTC m=+1.071374420 container init 95284dae4ab8bda36351330c01104311ce5c98867ee2b97f5ffc45ab8a1d48d1 (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:29 compute-0 podman[74692]: 2025-12-05 09:43:29.246200277 +0000 UTC m=+1.079911325 container start 95284dae4ab8bda36351330c01104311ce5c98867ee2b97f5ffc45ab8a1d48d1 (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 09:43:29 compute-0 bash[74692]: 95284dae4ab8bda36351330c01104311ce5c98867ee2b97f5ffc45ab8a1d48d1
Dec 05 09:43:29 compute-0 systemd[1]: Started Ceph mgr.compute-0.hvnxai for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:43:29 compute-0 ceph-mgr[74711]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 09:43:29 compute-0 ceph-mgr[74711]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 05 09:43:29 compute-0 ceph-mgr[74711]: pidfile_write: ignore empty --pid-file
Dec 05 09:43:29 compute-0 podman[74712]: 2025-12-05 09:43:29.326404065 +0000 UTC m=+0.040260693 container create ca63537fe7a0e707a5b0b7d832d86ca3c7c3538d1b4bd74e625a030513d3fa37 (image=quay.io/ceph/ceph:v19, name=festive_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 09:43:29 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'alerts'
Dec 05 09:43:29 compute-0 systemd[1]: Started libpod-conmon-ca63537fe7a0e707a5b0b7d832d86ca3c7c3538d1b4bd74e625a030513d3fa37.scope.
Dec 05 09:43:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:29 compute-0 podman[74712]: 2025-12-05 09:43:29.31093147 +0000 UTC m=+0.024788128 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6f38607482fc49cb30faa9d0dbcee7c35d43bcc5bbe258e273e8b4c82abe64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6f38607482fc49cb30faa9d0dbcee7c35d43bcc5bbe258e273e8b4c82abe64/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6f38607482fc49cb30faa9d0dbcee7c35d43bcc5bbe258e273e8b4c82abe64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:29 compute-0 podman[74712]: 2025-12-05 09:43:29.427590963 +0000 UTC m=+0.141447611 container init ca63537fe7a0e707a5b0b7d832d86ca3c7c3538d1b4bd74e625a030513d3fa37 (image=quay.io/ceph/ceph:v19, name=festive_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 05 09:43:29 compute-0 podman[74712]: 2025-12-05 09:43:29.437381372 +0000 UTC m=+0.151238010 container start ca63537fe7a0e707a5b0b7d832d86ca3c7c3538d1b4bd74e625a030513d3fa37 (image=quay.io/ceph/ceph:v19, name=festive_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:43:29 compute-0 podman[74712]: 2025-12-05 09:43:29.440998465 +0000 UTC m=+0.154855093 container attach ca63537fe7a0e707a5b0b7d832d86ca3c7c3538d1b4bd74e625a030513d3fa37 (image=quay.io/ceph/ceph:v19, name=festive_benz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 05 09:43:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:29.450+0000 7f22ff081140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:43:29 compute-0 ceph-mgr[74711]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:43:29 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'balancer'
Dec 05 09:43:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:29.556+0000 7f22ff081140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:43:29 compute-0 ceph-mgr[74711]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:43:29 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'cephadm'
Dec 05 09:43:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 05 09:43:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605130966' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 09:43:29 compute-0 festive_benz[74749]: 
Dec 05 09:43:29 compute-0 festive_benz[74749]: {
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "health": {
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "status": "HEALTH_OK",
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "checks": {},
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "mutes": []
Dec 05 09:43:29 compute-0 festive_benz[74749]:     },
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "election_epoch": 5,
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "quorum": [
Dec 05 09:43:29 compute-0 festive_benz[74749]:         0
Dec 05 09:43:29 compute-0 festive_benz[74749]:     ],
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "quorum_names": [
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "compute-0"
Dec 05 09:43:29 compute-0 festive_benz[74749]:     ],
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "quorum_age": 3,
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "monmap": {
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "epoch": 1,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "min_mon_release_name": "squid",
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "num_mons": 1
Dec 05 09:43:29 compute-0 festive_benz[74749]:     },
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "osdmap": {
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "epoch": 1,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "num_osds": 0,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "num_up_osds": 0,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "osd_up_since": 0,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "num_in_osds": 0,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "osd_in_since": 0,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "num_remapped_pgs": 0
Dec 05 09:43:29 compute-0 festive_benz[74749]:     },
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "pgmap": {
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "pgs_by_state": [],
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "num_pgs": 0,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "num_pools": 0,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "num_objects": 0,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "data_bytes": 0,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "bytes_used": 0,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "bytes_avail": 0,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "bytes_total": 0
Dec 05 09:43:29 compute-0 festive_benz[74749]:     },
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "fsmap": {
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "epoch": 1,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "btime": "2025-12-05T09:43:21:401410+0000",
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "by_rank": [],
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "up:standby": 0
Dec 05 09:43:29 compute-0 festive_benz[74749]:     },
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "mgrmap": {
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "available": false,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "num_standbys": 0,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "modules": [
Dec 05 09:43:29 compute-0 festive_benz[74749]:             "iostat",
Dec 05 09:43:29 compute-0 festive_benz[74749]:             "nfs",
Dec 05 09:43:29 compute-0 festive_benz[74749]:             "restful"
Dec 05 09:43:29 compute-0 festive_benz[74749]:         ],
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "services": {}
Dec 05 09:43:29 compute-0 festive_benz[74749]:     },
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "servicemap": {
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "epoch": 1,
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "modified": "2025-12-05T09:43:21.404052+0000",
Dec 05 09:43:29 compute-0 festive_benz[74749]:         "services": {}
Dec 05 09:43:29 compute-0 festive_benz[74749]:     },
Dec 05 09:43:29 compute-0 festive_benz[74749]:     "progress_events": {}
Dec 05 09:43:29 compute-0 festive_benz[74749]: }
Dec 05 09:43:29 compute-0 systemd[1]: libpod-ca63537fe7a0e707a5b0b7d832d86ca3c7c3538d1b4bd74e625a030513d3fa37.scope: Deactivated successfully.
Dec 05 09:43:29 compute-0 podman[74712]: 2025-12-05 09:43:29.661120655 +0000 UTC m=+0.374977323 container died ca63537fe7a0e707a5b0b7d832d86ca3c7c3538d1b4bd74e625a030513d3fa37 (image=quay.io/ceph/ceph:v19, name=festive_benz, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 09:43:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-be6f38607482fc49cb30faa9d0dbcee7c35d43bcc5bbe258e273e8b4c82abe64-merged.mount: Deactivated successfully.
Dec 05 09:43:29 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/605130966' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 09:43:29 compute-0 podman[74712]: 2025-12-05 09:43:29.697806088 +0000 UTC m=+0.411662716 container remove ca63537fe7a0e707a5b0b7d832d86ca3c7c3538d1b4bd74e625a030513d3fa37 (image=quay.io/ceph/ceph:v19, name=festive_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 09:43:29 compute-0 systemd[1]: libpod-conmon-ca63537fe7a0e707a5b0b7d832d86ca3c7c3538d1b4bd74e625a030513d3fa37.scope: Deactivated successfully.
Dec 05 09:43:30 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'crash'
Dec 05 09:43:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:30.452+0000 7f22ff081140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:43:30 compute-0 ceph-mgr[74711]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:43:30 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'dashboard'
Dec 05 09:43:31 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'devicehealth'
Dec 05 09:43:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:31.154+0000 7f22ff081140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:43:31 compute-0 ceph-mgr[74711]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:43:31 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'diskprediction_local'
Dec 05 09:43:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 05 09:43:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 05 09:43:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   from numpy import show_config as show_numpy_config
Dec 05 09:43:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:31.327+0000 7f22ff081140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:43:31 compute-0 ceph-mgr[74711]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:43:31 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'influx'
Dec 05 09:43:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:31.401+0000 7f22ff081140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:43:31 compute-0 ceph-mgr[74711]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:43:31 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'insights'
Dec 05 09:43:31 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'iostat'
Dec 05 09:43:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:31.559+0000 7f22ff081140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:43:31 compute-0 ceph-mgr[74711]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:43:31 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'k8sevents'
Dec 05 09:43:31 compute-0 podman[74798]: 2025-12-05 09:43:31.757872538 +0000 UTC m=+0.025507477 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:31 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'localpool'
Dec 05 09:43:32 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'mds_autoscaler'
Dec 05 09:43:32 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'mirroring'
Dec 05 09:43:32 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'nfs'
Dec 05 09:43:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:32.595+0000 7f22ff081140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:43:32 compute-0 ceph-mgr[74711]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:43:32 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'orchestrator'
Dec 05 09:43:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:32.817+0000 7f22ff081140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:43:32 compute-0 ceph-mgr[74711]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:43:32 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'osd_perf_query'
Dec 05 09:43:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:32.899+0000 7f22ff081140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:43:32 compute-0 ceph-mgr[74711]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:43:32 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'osd_support'
Dec 05 09:43:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:32.969+0000 7f22ff081140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:43:32 compute-0 ceph-mgr[74711]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:43:32 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'pg_autoscaler'
Dec 05 09:43:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:33.049+0000 7f22ff081140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:43:33 compute-0 ceph-mgr[74711]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:43:33 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'progress'
Dec 05 09:43:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:33.122+0000 7f22ff081140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:43:33 compute-0 ceph-mgr[74711]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:43:33 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'prometheus'
Dec 05 09:43:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:33.500+0000 7f22ff081140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:43:33 compute-0 ceph-mgr[74711]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:43:33 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rbd_support'
Dec 05 09:43:33 compute-0 podman[74798]: 2025-12-05 09:43:33.587668929 +0000 UTC m=+1.855303858 container create dd2be2d325554598e35db17fe26414e6389408320f0bac965d225b4db176e24a (image=quay.io/ceph/ceph:v19, name=quirky_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 09:43:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:33.605+0000 7f22ff081140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:43:33 compute-0 ceph-mgr[74711]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:43:33 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'restful'
Dec 05 09:43:33 compute-0 systemd[1]: Started libpod-conmon-dd2be2d325554598e35db17fe26414e6389408320f0bac965d225b4db176e24a.scope.
Dec 05 09:43:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e64d539a5022e1e3ab0b7aa343685eb90f619f4947467010d59a3a3af64a421/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e64d539a5022e1e3ab0b7aa343685eb90f619f4947467010d59a3a3af64a421/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e64d539a5022e1e3ab0b7aa343685eb90f619f4947467010d59a3a3af64a421/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:33 compute-0 podman[74798]: 2025-12-05 09:43:33.748986483 +0000 UTC m=+2.016621422 container init dd2be2d325554598e35db17fe26414e6389408320f0bac965d225b4db176e24a (image=quay.io/ceph/ceph:v19, name=quirky_yonath, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:33 compute-0 podman[74798]: 2025-12-05 09:43:33.754293108 +0000 UTC m=+2.021928047 container start dd2be2d325554598e35db17fe26414e6389408320f0bac965d225b4db176e24a (image=quay.io/ceph/ceph:v19, name=quirky_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 09:43:33 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rgw'
Dec 05 09:43:33 compute-0 podman[74798]: 2025-12-05 09:43:33.869666603 +0000 UTC m=+2.137301552 container attach dd2be2d325554598e35db17fe26414e6389408320f0bac965d225b4db176e24a (image=quay.io/ceph/ceph:v19, name=quirky_yonath, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:43:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 05 09:43:33 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/345906351' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 09:43:33 compute-0 quirky_yonath[74815]: 
Dec 05 09:43:33 compute-0 quirky_yonath[74815]: {
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "health": {
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "status": "HEALTH_OK",
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "checks": {},
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "mutes": []
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     },
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "election_epoch": 5,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "quorum": [
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         0
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     ],
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "quorum_names": [
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "compute-0"
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     ],
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "quorum_age": 7,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "monmap": {
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "epoch": 1,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "min_mon_release_name": "squid",
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "num_mons": 1
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     },
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "osdmap": {
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "epoch": 1,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "num_osds": 0,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "num_up_osds": 0,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "osd_up_since": 0,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "num_in_osds": 0,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "osd_in_since": 0,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "num_remapped_pgs": 0
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     },
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "pgmap": {
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "pgs_by_state": [],
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "num_pgs": 0,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "num_pools": 0,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "num_objects": 0,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "data_bytes": 0,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "bytes_used": 0,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "bytes_avail": 0,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "bytes_total": 0
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     },
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "fsmap": {
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "epoch": 1,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "btime": "2025-12-05T09:43:21:401410+0000",
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "by_rank": [],
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "up:standby": 0
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     },
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "mgrmap": {
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "available": false,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "num_standbys": 0,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "modules": [
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:             "iostat",
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:             "nfs",
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:             "restful"
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         ],
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "services": {}
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     },
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "servicemap": {
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "epoch": 1,
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "modified": "2025-12-05T09:43:21.404052+0000",
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:         "services": {}
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     },
Dec 05 09:43:33 compute-0 quirky_yonath[74815]:     "progress_events": {}
Dec 05 09:43:33 compute-0 quirky_yonath[74815]: }
Dec 05 09:43:34 compute-0 systemd[1]: libpod-dd2be2d325554598e35db17fe26414e6389408320f0bac965d225b4db176e24a.scope: Deactivated successfully.
Dec 05 09:43:34 compute-0 podman[74798]: 2025-12-05 09:43:34.002894623 +0000 UTC m=+2.270529542 container died dd2be2d325554598e35db17fe26414e6389408320f0bac965d225b4db176e24a (image=quay.io/ceph/ceph:v19, name=quirky_yonath, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 09:43:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:34.044+0000 7f22ff081140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rook'
Dec 05 09:43:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:34.602+0000 7f22ff081140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'selftest'
Dec 05 09:43:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:34.679+0000 7f22ff081140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'snap_schedule'
Dec 05 09:43:34 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/345906351' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 09:43:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e64d539a5022e1e3ab0b7aa343685eb90f619f4947467010d59a3a3af64a421-merged.mount: Deactivated successfully.
Dec 05 09:43:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:34.762+0000 7f22ff081140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'stats'
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'status'
Dec 05 09:43:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:34.912+0000 7f22ff081140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'telegraf'
Dec 05 09:43:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:34.977+0000 7f22ff081140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:43:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'telemetry'
Dec 05 09:43:35 compute-0 podman[74798]: 2025-12-05 09:43:35.053134041 +0000 UTC m=+3.320768950 container remove dd2be2d325554598e35db17fe26414e6389408320f0bac965d225b4db176e24a (image=quay.io/ceph/ceph:v19, name=quirky_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 09:43:35 compute-0 systemd[1]: libpod-conmon-dd2be2d325554598e35db17fe26414e6389408320f0bac965d225b4db176e24a.scope: Deactivated successfully.
Dec 05 09:43:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:35.135+0000 7f22ff081140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:43:35 compute-0 ceph-mgr[74711]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:43:35 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'test_orchestrator'
Dec 05 09:43:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:35.361+0000 7f22ff081140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:43:35 compute-0 ceph-mgr[74711]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:43:35 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'volumes'
Dec 05 09:43:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:35.629+0000 7f22ff081140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:43:35 compute-0 ceph-mgr[74711]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:43:35 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'zabbix'
Dec 05 09:43:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:35.699+0000 7f22ff081140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 09:43:35 compute-0 ceph-mgr[74711]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 09:43:35 compute-0 ceph-mgr[74711]: ms_deliver_dispatch: unhandled message 0x5652a72389c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 05 09:43:35 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hvnxai
Dec 05 09:43:36 compute-0 ceph-mgr[74711]: mgr handle_mgr_map Activating!
Dec 05 09:43:36 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.hvnxai(active, starting, since 0.866187s)
Dec 05 09:43:36 compute-0 ceph-mgr[74711]: mgr handle_mgr_map I am now activating
Dec 05 09:43:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 05 09:43:36 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 09:43:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e1 all = 1
Dec 05 09:43:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 05 09:43:36 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 09:43:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 05 09:43:36 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 09:43:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 05 09:43:36 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:43:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"} v 0)
Dec 05 09:43:36 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"}]: dispatch
Dec 05 09:43:37 compute-0 podman[74869]: 2025-12-05 09:43:37.100358776 +0000 UTC m=+0.023465026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: balancer
Dec 05 09:43:37 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Manager daemon compute-0.hvnxai is now available
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [balancer INFO root] Starting
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: crash
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:43:37
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [balancer INFO root] No pools available
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: devicehealth
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: iostat
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Starting
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: nfs
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: orchestrator
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: pg_autoscaler
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: progress
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [progress INFO root] Loading...
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [progress INFO root] No stored events to load
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [progress INFO root] Loaded [] historic events
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [progress INFO root] Loaded OSDMap, ready.
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [rbd_support INFO root] recovery thread starting
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [rbd_support INFO root] starting setup
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: rbd_support
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: restful
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [restful INFO root] server_addr: :: server_port: 8003
Dec 05 09:43:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"} v 0)
Dec 05 09:43:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"}]: dispatch
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: status
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [restful WARNING root] server not running: no certificate configured
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: telemetry
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [rbd_support INFO root] PerfHandler: starting
Dec 05 09:43:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TaskHandler: starting
Dec 05 09:43:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"} v 0)
Dec 05 09:43:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"}]: dispatch
Dec 05 09:43:37 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: volumes
Dec 05 09:43:38 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:43:40 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:43:40 compute-0 podman[74869]: 2025-12-05 09:43:40.83374829 +0000 UTC m=+3.756854520 container create 2f6b718a5a07352c173ca61c3df369122122d6a9d108ccf2f592fe1ddd081d84 (image=quay.io/ceph/ceph:v19, name=mystifying_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 09:43:40 compute-0 ceph-mon[74418]: Activating manager daemon compute-0.hvnxai
Dec 05 09:43:40 compute-0 ceph-mon[74418]: mgrmap e2: compute-0.hvnxai(active, starting, since 0.866187s)
Dec 05 09:43:40 compute-0 ceph-mon[74418]: from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 09:43:40 compute-0 ceph-mon[74418]: from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 09:43:40 compute-0 ceph-mon[74418]: from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 09:43:40 compute-0 ceph-mon[74418]: from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:43:40 compute-0 ceph-mon[74418]: from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"}]: dispatch
Dec 05 09:43:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:40 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.hvnxai(active, since 5s)
Dec 05 09:43:40 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:43:40 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 05 09:43:40 compute-0 ceph-mgr[74711]: [rbd_support INFO root] setup complete
Dec 05 09:43:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec 05 09:43:40 compute-0 systemd[1]: Started libpod-conmon-2f6b718a5a07352c173ca61c3df369122122d6a9d108ccf2f592fe1ddd081d84.scope.
Dec 05 09:43:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec 05 09:43:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1c479a090815ebf377290d70c1e710df66824a00d968a0b7b510d17e836df4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1c479a090815ebf377290d70c1e710df66824a00d968a0b7b510d17e836df4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1c479a090815ebf377290d70c1e710df66824a00d968a0b7b510d17e836df4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:40 compute-0 podman[74869]: 2025-12-05 09:43:40.920172611 +0000 UTC m=+3.843278861 container init 2f6b718a5a07352c173ca61c3df369122122d6a9d108ccf2f592fe1ddd081d84 (image=quay.io/ceph/ceph:v19, name=mystifying_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:43:40 compute-0 podman[74869]: 2025-12-05 09:43:40.925148658 +0000 UTC m=+3.848254878 container start 2f6b718a5a07352c173ca61c3df369122122d6a9d108ccf2f592fe1ddd081d84 (image=quay.io/ceph/ceph:v19, name=mystifying_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Dec 05 09:43:40 compute-0 podman[74869]: 2025-12-05 09:43:40.92875821 +0000 UTC m=+3.851864430 container attach 2f6b718a5a07352c173ca61c3df369122122d6a9d108ccf2f592fe1ddd081d84 (image=quay.io/ceph/ceph:v19, name=mystifying_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:43:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 05 09:43:41 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2943721699' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]: 
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]: {
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "health": {
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "status": "HEALTH_OK",
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "checks": {},
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "mutes": []
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     },
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "election_epoch": 5,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "quorum": [
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         0
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     ],
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "quorum_names": [
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "compute-0"
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     ],
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "quorum_age": 15,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "monmap": {
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "epoch": 1,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "min_mon_release_name": "squid",
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "num_mons": 1
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     },
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "osdmap": {
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "epoch": 1,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "num_osds": 0,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "num_up_osds": 0,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "osd_up_since": 0,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "num_in_osds": 0,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "osd_in_since": 0,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "num_remapped_pgs": 0
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     },
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "pgmap": {
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "pgs_by_state": [],
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "num_pgs": 0,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "num_pools": 0,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "num_objects": 0,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "data_bytes": 0,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "bytes_used": 0,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "bytes_avail": 0,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "bytes_total": 0
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     },
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "fsmap": {
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "epoch": 1,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "btime": "2025-12-05T09:43:21:401410+0000",
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "by_rank": [],
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "up:standby": 0
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     },
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "mgrmap": {
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "available": true,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "num_standbys": 0,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "modules": [
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:             "iostat",
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:             "nfs",
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:             "restful"
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         ],
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "services": {}
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     },
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "servicemap": {
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "epoch": 1,
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "modified": "2025-12-05T09:43:21.404052+0000",
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:         "services": {}
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     },
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]:     "progress_events": {}
Dec 05 09:43:41 compute-0 mystifying_faraday[74952]: }
Dec 05 09:43:41 compute-0 systemd[1]: libpod-2f6b718a5a07352c173ca61c3df369122122d6a9d108ccf2f592fe1ddd081d84.scope: Deactivated successfully.
Dec 05 09:43:41 compute-0 podman[74869]: 2025-12-05 09:43:41.371193892 +0000 UTC m=+4.294300122 container died 2f6b718a5a07352c173ca61c3df369122122d6a9d108ccf2f592fe1ddd081d84 (image=quay.io/ceph/ceph:v19, name=mystifying_faraday, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 09:43:42 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f1c479a090815ebf377290d70c1e710df66824a00d968a0b7b510d17e836df4-merged.mount: Deactivated successfully.
Dec 05 09:43:42 compute-0 ceph-mon[74418]: Manager daemon compute-0.hvnxai is now available
Dec 05 09:43:42 compute-0 ceph-mon[74418]: from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"}]: dispatch
Dec 05 09:43:42 compute-0 ceph-mon[74418]: from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"}]: dispatch
Dec 05 09:43:42 compute-0 ceph-mon[74418]: from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:42 compute-0 ceph-mon[74418]: mgrmap e3: compute-0.hvnxai(active, since 5s)
Dec 05 09:43:42 compute-0 ceph-mon[74418]: from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:42 compute-0 ceph-mon[74418]: from='mgr.14102 192.168.122.100:0/457514433' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2943721699' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 09:43:42 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.hvnxai(active, since 6s)
Dec 05 09:43:42 compute-0 podman[74869]: 2025-12-05 09:43:42.624302985 +0000 UTC m=+5.547409215 container remove 2f6b718a5a07352c173ca61c3df369122122d6a9d108ccf2f592fe1ddd081d84 (image=quay.io/ceph/ceph:v19, name=mystifying_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Dec 05 09:43:42 compute-0 systemd[1]: libpod-conmon-2f6b718a5a07352c173ca61c3df369122122d6a9d108ccf2f592fe1ddd081d84.scope: Deactivated successfully.
Dec 05 09:43:42 compute-0 podman[74990]: 2025-12-05 09:43:42.663181632 +0000 UTC m=+0.019633896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:42 compute-0 podman[74990]: 2025-12-05 09:43:42.89450598 +0000 UTC m=+0.250958254 container create 0e147f0241895e6b9a5d76869eddf6e00f82b1f754d81aca1d1474efd0b92730 (image=quay.io/ceph/ceph:v19, name=angry_perlman, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:43:42 compute-0 systemd[1]: Started libpod-conmon-0e147f0241895e6b9a5d76869eddf6e00f82b1f754d81aca1d1474efd0b92730.scope.
Dec 05 09:43:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db76664b9e52a0f0e546f7a1eb85e910cd187cc4722256d32c528051ec80840/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db76664b9e52a0f0e546f7a1eb85e910cd187cc4722256d32c528051ec80840/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db76664b9e52a0f0e546f7a1eb85e910cd187cc4722256d32c528051ec80840/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db76664b9e52a0f0e546f7a1eb85e910cd187cc4722256d32c528051ec80840/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:43 compute-0 podman[74990]: 2025-12-05 09:43:43.161494284 +0000 UTC m=+0.517946528 container init 0e147f0241895e6b9a5d76869eddf6e00f82b1f754d81aca1d1474efd0b92730 (image=quay.io/ceph/ceph:v19, name=angry_perlman, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 09:43:43 compute-0 podman[74990]: 2025-12-05 09:43:43.167693914 +0000 UTC m=+0.524146158 container start 0e147f0241895e6b9a5d76869eddf6e00f82b1f754d81aca1d1474efd0b92730 (image=quay.io/ceph/ceph:v19, name=angry_perlman, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 05 09:43:43 compute-0 podman[74990]: 2025-12-05 09:43:43.172704434 +0000 UTC m=+0.529156678 container attach 0e147f0241895e6b9a5d76869eddf6e00f82b1f754d81aca1d1474efd0b92730 (image=quay.io/ceph/ceph:v19, name=angry_perlman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:43:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 05 09:43:43 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3569545036' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 05 09:43:43 compute-0 angry_perlman[75006]: 
Dec 05 09:43:43 compute-0 angry_perlman[75006]: [global]
Dec 05 09:43:43 compute-0 angry_perlman[75006]:         fsid = 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:43:43 compute-0 angry_perlman[75006]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 05 09:43:43 compute-0 systemd[1]: libpod-0e147f0241895e6b9a5d76869eddf6e00f82b1f754d81aca1d1474efd0b92730.scope: Deactivated successfully.
Dec 05 09:43:43 compute-0 podman[74990]: 2025-12-05 09:43:43.514724915 +0000 UTC m=+0.871177179 container died 0e147f0241895e6b9a5d76869eddf6e00f82b1f754d81aca1d1474efd0b92730 (image=quay.io/ceph/ceph:v19, name=angry_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 09:43:44 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:43:45 compute-0 ceph-mon[74418]: mgrmap e4: compute-0.hvnxai(active, since 6s)
Dec 05 09:43:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3569545036' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 05 09:43:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7db76664b9e52a0f0e546f7a1eb85e910cd187cc4722256d32c528051ec80840-merged.mount: Deactivated successfully.
Dec 05 09:43:46 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:43:48 compute-0 podman[74990]: 2025-12-05 09:43:48.256419235 +0000 UTC m=+5.612871479 container remove 0e147f0241895e6b9a5d76869eddf6e00f82b1f754d81aca1d1474efd0b92730 (image=quay.io/ceph/ceph:v19, name=angry_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Dec 05 09:43:48 compute-0 systemd[1]: libpod-conmon-0e147f0241895e6b9a5d76869eddf6e00f82b1f754d81aca1d1474efd0b92730.scope: Deactivated successfully.
Dec 05 09:43:48 compute-0 podman[75045]: 2025-12-05 09:43:48.319520672 +0000 UTC m=+0.036721557 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:48 compute-0 podman[75045]: 2025-12-05 09:43:48.461700942 +0000 UTC m=+0.178901737 container create 1ddbf64515c26ed2d2cdb81dd1a7f42ed6f0db6ec814f805be462b931b7357f2 (image=quay.io/ceph/ceph:v19, name=focused_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 05 09:43:48 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:43:48 compute-0 systemd[1]: Started libpod-conmon-1ddbf64515c26ed2d2cdb81dd1a7f42ed6f0db6ec814f805be462b931b7357f2.scope.
Dec 05 09:43:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1ac2c1703cc1a1757c42de76b3f2556dc8ba71d7852e01bac4286567aa43a7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1ac2c1703cc1a1757c42de76b3f2556dc8ba71d7852e01bac4286567aa43a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1ac2c1703cc1a1757c42de76b3f2556dc8ba71d7852e01bac4286567aa43a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:48 compute-0 podman[75045]: 2025-12-05 09:43:48.666011727 +0000 UTC m=+0.383212542 container init 1ddbf64515c26ed2d2cdb81dd1a7f42ed6f0db6ec814f805be462b931b7357f2 (image=quay.io/ceph/ceph:v19, name=focused_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:48 compute-0 podman[75045]: 2025-12-05 09:43:48.670915677 +0000 UTC m=+0.388116472 container start 1ddbf64515c26ed2d2cdb81dd1a7f42ed6f0db6ec814f805be462b931b7357f2 (image=quay.io/ceph/ceph:v19, name=focused_dhawan, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:43:48 compute-0 podman[75045]: 2025-12-05 09:43:48.677303403 +0000 UTC m=+0.394504228 container attach 1ddbf64515c26ed2d2cdb81dd1a7f42ed6f0db6ec814f805be462b931b7357f2 (image=quay.io/ceph/ceph:v19, name=focused_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 09:43:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec 05 09:43:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2800236191' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 05 09:43:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2800236191' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 05 09:43:49 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.hvnxai(active, since 13s)
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  1: '-n'
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  2: 'mgr.compute-0.hvnxai'
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  3: '-f'
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  4: '--setuser'
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  5: 'ceph'
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  6: '--setgroup'
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  7: 'ceph'
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  8: '--default-log-to-file=false'
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  9: '--default-log-to-journald=true'
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr respawn  exe_path /proc/self/exe
Dec 05 09:43:49 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2800236191' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 05 09:43:49 compute-0 systemd[1]: libpod-1ddbf64515c26ed2d2cdb81dd1a7f42ed6f0db6ec814f805be462b931b7357f2.scope: Deactivated successfully.
Dec 05 09:43:49 compute-0 podman[75045]: 2025-12-05 09:43:49.357035513 +0000 UTC m=+1.074236298 container died 1ddbf64515c26ed2d2cdb81dd1a7f42ed6f0db6ec814f805be462b931b7357f2 (image=quay.io/ceph/ceph:v19, name=focused_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb1ac2c1703cc1a1757c42de76b3f2556dc8ba71d7852e01bac4286567aa43a7-merged.mount: Deactivated successfully.
Dec 05 09:43:49 compute-0 podman[75045]: 2025-12-05 09:43:49.396342466 +0000 UTC m=+1.113543261 container remove 1ddbf64515c26ed2d2cdb81dd1a7f42ed6f0db6ec814f805be462b931b7357f2 (image=quay.io/ceph/ceph:v19, name=focused_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 09:43:49 compute-0 systemd[1]: libpod-conmon-1ddbf64515c26ed2d2cdb81dd1a7f42ed6f0db6ec814f805be462b931b7357f2.scope: Deactivated successfully.
Dec 05 09:43:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ignoring --setuser ceph since I am not root
Dec 05 09:43:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ignoring --setgroup ceph since I am not root
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: pidfile_write: ignore empty --pid-file
Dec 05 09:43:49 compute-0 podman[75101]: 2025-12-05 09:43:49.458392784 +0000 UTC m=+0.045758044 container create 42d413226b18b33a290d77c3ee218640269f2aae0d1516e03bcec7f7d0ef0f1c (image=quay.io/ceph/ceph:v19, name=eloquent_babbage, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'alerts'
Dec 05 09:43:49 compute-0 systemd[1]: Started libpod-conmon-42d413226b18b33a290d77c3ee218640269f2aae0d1516e03bcec7f7d0ef0f1c.scope.
Dec 05 09:43:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a78581c04dfbb8f57bfe8221454107c4acb0109dc18fe24992383eb0000657/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a78581c04dfbb8f57bfe8221454107c4acb0109dc18fe24992383eb0000657/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a78581c04dfbb8f57bfe8221454107c4acb0109dc18fe24992383eb0000657/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:49 compute-0 podman[75101]: 2025-12-05 09:43:49.434365402 +0000 UTC m=+0.021730682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:49 compute-0 podman[75101]: 2025-12-05 09:43:49.536726276 +0000 UTC m=+0.124091556 container init 42d413226b18b33a290d77c3ee218640269f2aae0d1516e03bcec7f7d0ef0f1c (image=quay.io/ceph/ceph:v19, name=eloquent_babbage, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 09:43:49 compute-0 podman[75101]: 2025-12-05 09:43:49.542303803 +0000 UTC m=+0.129669063 container start 42d413226b18b33a290d77c3ee218640269f2aae0d1516e03bcec7f7d0ef0f1c (image=quay.io/ceph/ceph:v19, name=eloquent_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 05 09:43:49 compute-0 podman[75101]: 2025-12-05 09:43:49.545162622 +0000 UTC m=+0.132527902 container attach 42d413226b18b33a290d77c3ee218640269f2aae0d1516e03bcec7f7d0ef0f1c (image=quay.io/ceph/ceph:v19, name=eloquent_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'balancer'
Dec 05 09:43:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:49.585+0000 7f9ebba9b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:43:49 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'cephadm'
Dec 05 09:43:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:49.670+0000 7f9ebba9b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:43:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 05 09:43:49 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890776290' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 09:43:49 compute-0 eloquent_babbage[75137]: {
Dec 05 09:43:49 compute-0 eloquent_babbage[75137]:     "epoch": 5,
Dec 05 09:43:49 compute-0 eloquent_babbage[75137]:     "available": true,
Dec 05 09:43:49 compute-0 eloquent_babbage[75137]:     "active_name": "compute-0.hvnxai",
Dec 05 09:43:49 compute-0 eloquent_babbage[75137]:     "num_standby": 0
Dec 05 09:43:49 compute-0 eloquent_babbage[75137]: }
Dec 05 09:43:49 compute-0 systemd[1]: libpod-42d413226b18b33a290d77c3ee218640269f2aae0d1516e03bcec7f7d0ef0f1c.scope: Deactivated successfully.
Dec 05 09:43:49 compute-0 podman[75101]: 2025-12-05 09:43:49.995082973 +0000 UTC m=+0.582448233 container died 42d413226b18b33a290d77c3ee218640269f2aae0d1516e03bcec7f7d0ef0f1c (image=quay.io/ceph/ceph:v19, name=eloquent_babbage, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6a78581c04dfbb8f57bfe8221454107c4acb0109dc18fe24992383eb0000657-merged.mount: Deactivated successfully.
Dec 05 09:43:50 compute-0 podman[75101]: 2025-12-05 09:43:50.058141205 +0000 UTC m=+0.645506495 container remove 42d413226b18b33a290d77c3ee218640269f2aae0d1516e03bcec7f7d0ef0f1c (image=quay.io/ceph/ceph:v19, name=eloquent_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 05 09:43:50 compute-0 systemd[1]: libpod-conmon-42d413226b18b33a290d77c3ee218640269f2aae0d1516e03bcec7f7d0ef0f1c.scope: Deactivated successfully.
Dec 05 09:43:50 compute-0 podman[75178]: 2025-12-05 09:43:50.131836618 +0000 UTC m=+0.046943093 container create 18cbcd1ebb7a9479e23f182679cb937586e20904f278bb3450ed7d7a36d1c2b8 (image=quay.io/ceph/ceph:v19, name=laughing_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:50 compute-0 systemd[1]: Started libpod-conmon-18cbcd1ebb7a9479e23f182679cb937586e20904f278bb3450ed7d7a36d1c2b8.scope.
Dec 05 09:43:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9947498a732d54f6d29ec8d5a938aeb5649ccddb81ae61fad9ef7d457bbc0ee0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9947498a732d54f6d29ec8d5a938aeb5649ccddb81ae61fad9ef7d457bbc0ee0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9947498a732d54f6d29ec8d5a938aeb5649ccddb81ae61fad9ef7d457bbc0ee0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:50 compute-0 podman[75178]: 2025-12-05 09:43:50.112055911 +0000 UTC m=+0.027162396 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:50 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'crash'
Dec 05 09:43:50 compute-0 ceph-mgr[74711]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:43:50 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'dashboard'
Dec 05 09:43:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:50.529+0000 7f9ebba9b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:43:50 compute-0 podman[75178]: 2025-12-05 09:43:50.692754806 +0000 UTC m=+0.607861301 container init 18cbcd1ebb7a9479e23f182679cb937586e20904f278bb3450ed7d7a36d1c2b8 (image=quay.io/ceph/ceph:v19, name=laughing_lederberg, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:43:50 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2800236191' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 05 09:43:50 compute-0 ceph-mon[74418]: mgrmap e5: compute-0.hvnxai(active, since 13s)
Dec 05 09:43:50 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/890776290' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 09:43:50 compute-0 podman[75178]: 2025-12-05 09:43:50.698825314 +0000 UTC m=+0.613931770 container start 18cbcd1ebb7a9479e23f182679cb937586e20904f278bb3450ed7d7a36d1c2b8 (image=quay.io/ceph/ceph:v19, name=laughing_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:43:50 compute-0 podman[75178]: 2025-12-05 09:43:50.702564688 +0000 UTC m=+0.617671153 container attach 18cbcd1ebb7a9479e23f182679cb937586e20904f278bb3450ed7d7a36d1c2b8 (image=quay.io/ceph/ceph:v19, name=laughing_lederberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:43:51 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'devicehealth'
Dec 05 09:43:51 compute-0 ceph-mgr[74711]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:43:51 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'diskprediction_local'
Dec 05 09:43:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:51.212+0000 7f9ebba9b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:43:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 05 09:43:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 05 09:43:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   from numpy import show_config as show_numpy_config
Dec 05 09:43:51 compute-0 ceph-mgr[74711]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:43:51 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'influx'
Dec 05 09:43:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:51.392+0000 7f9ebba9b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:43:51 compute-0 ceph-mgr[74711]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:43:51 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'insights'
Dec 05 09:43:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:51.465+0000 7f9ebba9b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:43:51 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'iostat'
Dec 05 09:43:51 compute-0 ceph-mgr[74711]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:43:51 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'k8sevents'
Dec 05 09:43:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:51.611+0000 7f9ebba9b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:43:52 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'localpool'
Dec 05 09:43:52 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'mds_autoscaler'
Dec 05 09:43:52 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'mirroring'
Dec 05 09:43:52 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'nfs'
Dec 05 09:43:52 compute-0 ceph-mgr[74711]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:43:52 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'orchestrator'
Dec 05 09:43:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:52.660+0000 7f9ebba9b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:43:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:52.892+0000 7f9ebba9b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:43:52 compute-0 ceph-mgr[74711]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:43:52 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'osd_perf_query'
Dec 05 09:43:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:52.971+0000 7f9ebba9b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:43:52 compute-0 ceph-mgr[74711]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:43:52 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'osd_support'
Dec 05 09:43:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:53.036+0000 7f9ebba9b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:43:53 compute-0 ceph-mgr[74711]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:43:53 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'pg_autoscaler'
Dec 05 09:43:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:53.114+0000 7f9ebba9b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:43:53 compute-0 ceph-mgr[74711]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:43:53 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'progress'
Dec 05 09:43:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:53.209+0000 7f9ebba9b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:43:53 compute-0 ceph-mgr[74711]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:43:53 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'prometheus'
Dec 05 09:43:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:53.577+0000 7f9ebba9b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:43:53 compute-0 ceph-mgr[74711]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:43:53 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rbd_support'
Dec 05 09:43:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:53.677+0000 7f9ebba9b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:43:53 compute-0 ceph-mgr[74711]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:43:53 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'restful'
Dec 05 09:43:53 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rgw'
Dec 05 09:43:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:54.149+0000 7f9ebba9b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:43:54 compute-0 ceph-mgr[74711]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:43:54 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rook'
Dec 05 09:43:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:54.735+0000 7f9ebba9b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:43:54 compute-0 ceph-mgr[74711]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:43:54 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'selftest'
Dec 05 09:43:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:54.809+0000 7f9ebba9b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:43:54 compute-0 ceph-mgr[74711]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:43:54 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'snap_schedule'
Dec 05 09:43:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:54.886+0000 7f9ebba9b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 09:43:54 compute-0 ceph-mgr[74711]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 09:43:54 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'stats'
Dec 05 09:43:54 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'status'
Dec 05 09:43:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:55.032+0000 7f9ebba9b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'telegraf'
Dec 05 09:43:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:55.101+0000 7f9ebba9b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'telemetry'
Dec 05 09:43:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:55.266+0000 7f9ebba9b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'test_orchestrator'
Dec 05 09:43:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:55.488+0000 7f9ebba9b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'volumes'
Dec 05 09:43:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:55.777+0000 7f9ebba9b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'zabbix'
Dec 05 09:43:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:43:55.843+0000 7f9ebba9b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
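The burst of "-1 mgr[py] Module ... has missing NOTIFY_TYPES member" lines above appears to be a load-time nag rather than a failure: recent ceph-mgr releases expect each Python module to declare which cluster notifications it consumes in a NOTIFY_TYPES class attribute, and the loader logs one warning per module that lacks it while still loading the module. A minimal sketch of that check, with illustrative names rather than the real ceph-mgr loader code:

    # Toy reproduction of the loader check behind the warnings above.
    from enum import Enum

    class NotifyType(str, Enum):
        mon_map = "mon_map"
        osd_map = "osd_map"

    class ExampleModule:
        # Declaring this attribute is what silences the warning.
        NOTIFY_TYPES = [NotifyType.osd_map]

    def check_notify_types(cls) -> None:
        if not hasattr(cls, "NOTIFY_TYPES"):
            print(f"mgr[py] Module {cls.__name__} has missing NOTIFY_TYPES member")

    check_notify_types(ExampleModule)            # silent: attribute present
    check_notify_types(type("zabbix", (), {}))   # prints the warning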
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Active manager daemon compute-0.hvnxai restarted
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hvnxai
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: ms_deliver_dispatch: unhandled message 0x560e88216d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr handle_mgr_map Activating!
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr handle_mgr_map I am now activating
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.hvnxai(active, starting, since 0.0392043s)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"} v 0)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e1 all = 1
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Manager daemon compute-0.hvnxai is now available
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: balancer
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [balancer INFO root] Starting
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:43:55
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [balancer INFO root] No pools available
Dec 05 09:43:55 compute-0 ceph-mon[74418]: Active manager daemon compute-0.hvnxai restarted
Dec 05 09:43:55 compute-0 ceph-mon[74418]: Activating manager daemon compute-0.hvnxai
Dec 05 09:43:55 compute-0 ceph-mon[74418]: osdmap e2: 0 total, 0 up, 0 in
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mgrmap e6: compute-0.hvnxai(active, starting, since 0.0392043s)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mon[74418]: Manager daemon compute-0.hvnxai is now available
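The audited "mon metadata" / "mgr metadata" / "osd metadata" dispatches above are JSON commands the newly active mgr sends to the monitor to populate its daemon inventory. A hedged sketch of issuing the same command from a client, assuming python3-rados is installed, the cluster is reachable, and the default conf path applies:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same JSON prefix as the dispatch logged above.
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "mon metadata", "id": "compute-0"}), b"")
        print(ret, outbuf.decode() or outs)
    finally:
        cluster.shutdown()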
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: cephadm
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: crash
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: devicehealth
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Starting
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: iostat
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: nfs
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: orchestrator
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: pg_autoscaler
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: progress
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [progress INFO root] Loading...
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [progress INFO root] No stored events to load
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [progress INFO root] Loaded [] historic events
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [progress INFO root] Loaded OSDMap, ready.
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] recovery thread starting
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] starting setup
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: rbd_support
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: restful
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [restful INFO root] server_addr: :: server_port: 8003
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: status
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"} v 0)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [restful WARNING root] server not running: no certificate configured
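The restful module loads but declines to serve on port 8003 because no TLS certificate is configured. Per the module's documentation a self-signed certificate can be generated with "ceph restful create-self-signed-cert"; an illustrative wrapper, assuming the ceph CLI and an admin keyring are available on the host:

    import subprocess

    # Clears the "server not running: no certificate configured" warning
    # by generating a self-signed cert for the restful module.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)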
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: telemetry
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] PerfHandler: starting
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TaskHandler: starting
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"} v 0)
Dec 05 09:43:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"}]: dispatch
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 05 09:43:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] setup complete
Dec 05 09:43:56 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: volumes
Dec 05 09:43:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019922458 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:43:56 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 05 09:43:56 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.hvnxai(active, since 1.04588s)
Dec 05 09:43:56 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 05 09:43:56 compute-0 laughing_lederberg[75202]: {
Dec 05 09:43:56 compute-0 laughing_lederberg[75202]:     "mgrmap_epoch": 7,
Dec 05 09:43:56 compute-0 laughing_lederberg[75202]:     "initialized": true
Dec 05 09:43:56 compute-0 laughing_lederberg[75202]: }
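The laughing_lederberg container is a short-lived cephadm helper polling mgr_status (the client.14124 dispatches above) and exiting once the mgr reports itself initialized. A toy readiness check over the JSON it printed; field names are taken from the log, the function name is invented:

    import json

    def mgr_ready(raw: str) -> bool:
        status = json.loads(raw)
        return bool(status.get("initialized")) and status.get("mgrmap_epoch", 0) > 0

    print(mgr_ready('{"mgrmap_epoch": 7, "initialized": true}'))  # True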
Dec 05 09:43:56 compute-0 podman[75178]: 2025-12-05 09:43:56.930642102 +0000 UTC m=+6.845748547 container died 18cbcd1ebb7a9479e23f182679cb937586e20904f278bb3450ed7d7a36d1c2b8 (image=quay.io/ceph/ceph:v19, name=laughing_lederberg, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:43:56 compute-0 systemd[1]: libpod-18cbcd1ebb7a9479e23f182679cb937586e20904f278bb3450ed7d7a36d1c2b8.scope: Deactivated successfully.
Dec 05 09:43:56 compute-0 ceph-mon[74418]: Found migration_current of "None". Setting to last migration.
Dec 05 09:43:56 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:56 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:56 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 09:43:56 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 09:43:56 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"}]: dispatch
Dec 05 09:43:56 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"}]: dispatch
Dec 05 09:43:56 compute-0 ceph-mon[74418]: mgrmap e7: compute-0.hvnxai(active, since 1.04588s)
Dec 05 09:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9947498a732d54f6d29ec8d5a938aeb5649ccddb81ae61fad9ef7d457bbc0ee0-merged.mount: Deactivated successfully.
Dec 05 09:43:56 compute-0 podman[75178]: 2025-12-05 09:43:56.971749865 +0000 UTC m=+6.886856320 container remove 18cbcd1ebb7a9479e23f182679cb937586e20904f278bb3450ed7d7a36d1c2b8 (image=quay.io/ceph/ceph:v19, name=laughing_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:56 compute-0 systemd[1]: libpod-conmon-18cbcd1ebb7a9479e23f182679cb937586e20904f278bb3450ed7d7a36d1c2b8.scope: Deactivated successfully.
Dec 05 09:43:57 compute-0 podman[75352]: 2025-12-05 09:43:57.048574481 +0000 UTC m=+0.049908122 container create 0be2cad41aad5c55461dcb5c0291fe9ff1c78d788e05298840bcfba653af3dc6 (image=quay.io/ceph/ceph:v19, name=elated_driscoll, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 05 09:43:57 compute-0 systemd[1]: Started libpod-conmon-0be2cad41aad5c55461dcb5c0291fe9ff1c78d788e05298840bcfba653af3dc6.scope.
Dec 05 09:43:57 compute-0 podman[75352]: 2025-12-05 09:43:57.027035606 +0000 UTC m=+0.028369287 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7011147b266ed2ad74125ab9b438d8fff3f20902cf0616ea3b72ae35804197f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7011147b266ed2ad74125ab9b438d8fff3f20902cf0616ea3b72ae35804197f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7011147b266ed2ad74125ab9b438d8fff3f20902cf0616ea3b72ae35804197f1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:57 compute-0 podman[75352]: 2025-12-05 09:43:57.171497278 +0000 UTC m=+0.172831029 container init 0be2cad41aad5c55461dcb5c0291fe9ff1c78d788e05298840bcfba653af3dc6 (image=quay.io/ceph/ceph:v19, name=elated_driscoll, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 05 09:43:57 compute-0 podman[75352]: 2025-12-05 09:43:57.179296541 +0000 UTC m=+0.180630182 container start 0be2cad41aad5c55461dcb5c0291fe9ff1c78d788e05298840bcfba653af3dc6 (image=quay.io/ceph/ceph:v19, name=elated_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:43:57 compute-0 podman[75352]: 2025-12-05 09:43:57.183092919 +0000 UTC m=+0.184426760 container attach 0be2cad41aad5c55461dcb5c0291fe9ff1c78d788e05298840bcfba653af3dc6 (image=quay.io/ceph/ceph:v19, name=elated_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:43:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Dec 05 09:43:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Dec 05 09:43:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:43:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec 05 09:43:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 05 09:43:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 09:43:57 compute-0 systemd[1]: libpod-0be2cad41aad5c55461dcb5c0291fe9ff1c78d788e05298840bcfba653af3dc6.scope: Deactivated successfully.
Dec 05 09:43:57 compute-0 podman[75352]: 2025-12-05 09:43:57.579040487 +0000 UTC m=+0.580374138 container died 0be2cad41aad5c55461dcb5c0291fe9ff1c78d788e05298840bcfba653af3dc6 (image=quay.io/ceph/ceph:v19, name=elated_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-7011147b266ed2ad74125ab9b438d8fff3f20902cf0616ea3b72ae35804197f1-merged.mount: Deactivated successfully.
Dec 05 09:43:57 compute-0 podman[75352]: 2025-12-05 09:43:57.642040455 +0000 UTC m=+0.643374096 container remove 0be2cad41aad5c55461dcb5c0291fe9ff1c78d788e05298840bcfba653af3dc6 (image=quay.io/ceph/ceph:v19, name=elated_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 09:43:57 compute-0 systemd[1]: libpod-conmon-0be2cad41aad5c55461dcb5c0291fe9ff1c78d788e05298840bcfba653af3dc6.scope: Deactivated successfully.
Dec 05 09:43:57 compute-0 podman[75405]: 2025-12-05 09:43:57.711693189 +0000 UTC m=+0.043853984 container create 0af452c8afbff09845b46445e1c64ebbdfee52765e9bf07161ca77d77a377e72 (image=quay.io/ceph/ceph:v19, name=crazy_sanderson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Dec 05 09:43:57 compute-0 systemd[1]: Started libpod-conmon-0af452c8afbff09845b46445e1c64ebbdfee52765e9bf07161ca77d77a377e72.scope.
Dec 05 09:43:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/884171e56cbcf9e83b59d98c5865dfd12f3544ca1f4cec9c3cd1afd6d450d344/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/884171e56cbcf9e83b59d98c5865dfd12f3544ca1f4cec9c3cd1afd6d450d344/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:57 compute-0 podman[75405]: 2025-12-05 09:43:57.695903757 +0000 UTC m=+0.028064592 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/884171e56cbcf9e83b59d98c5865dfd12f3544ca1f4cec9c3cd1afd6d450d344/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:57 compute-0 podman[75405]: 2025-12-05 09:43:57.801904257 +0000 UTC m=+0.134065132 container init 0af452c8afbff09845b46445e1c64ebbdfee52765e9bf07161ca77d77a377e72 (image=quay.io/ceph/ceph:v19, name=crazy_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:43:57 compute-0 podman[75405]: 2025-12-05 09:43:57.807265266 +0000 UTC m=+0.139426061 container start 0af452c8afbff09845b46445e1c64ebbdfee52765e9bf07161ca77d77a377e72 (image=quay.io/ceph/ceph:v19, name=crazy_sanderson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:43:57 compute-0 podman[75405]: 2025-12-05 09:43:57.810364516 +0000 UTC m=+0.142525411 container attach 0af452c8afbff09845b46445e1c64ebbdfee52765e9bf07161ca77d77a377e72 (image=quay.io/ceph/ceph:v19, name=crazy_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 09:43:57 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:43:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec 05 09:43:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: [cephadm INFO root] Set ssh ssh_user
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 05 09:43:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec 05 09:43:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: [cephadm INFO root] Set ssh ssh_config
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 05 09:43:58 compute-0 crazy_sanderson[75422]: ssh user set to ceph-admin. sudo will be used
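"ssh user set to ceph-admin. sudo will be used" means cephadm will log in to managed hosts as the unprivileged ceph-admin account and prefix remote commands with sudo, using the ssh_config and identity key stored via the config-key sets above. Roughly equivalent to the following sketch (illustrative only; cephadm drives ssh through a library rather than subprocess, and the paths mirror the /tmp/cephadm-ssh-key mounts seen later in this log):

    import subprocess

    def remote(host: str, *cmd: str) -> subprocess.CompletedProcess:
        # Non-root login plus sudo, as announced in the log line above.
        return subprocess.run(
            ["ssh", "-F", "/tmp/cephadm-ssh-config",
             "-i", "/tmp/cephadm-ssh-key",
             f"ceph-admin@{host}", "sudo", *cmd],
            capture_output=True, text=True)

    # e.g. remote("compute-0", "cephadm", "ls")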
Dec 05 09:43:58 compute-0 systemd[1]: libpod-0af452c8afbff09845b46445e1c64ebbdfee52765e9bf07161ca77d77a377e72.scope: Deactivated successfully.
Dec 05 09:43:58 compute-0 podman[75405]: 2025-12-05 09:43:58.190912014 +0000 UTC m=+0.523072849 container died 0af452c8afbff09845b46445e1c64ebbdfee52765e9bf07161ca77d77a377e72 (image=quay.io/ceph/ceph:v19, name=crazy_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 09:43:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-884171e56cbcf9e83b59d98c5865dfd12f3544ca1f4cec9c3cd1afd6d450d344-merged.mount: Deactivated successfully.
Dec 05 09:43:58 compute-0 podman[75405]: 2025-12-05 09:43:58.225757653 +0000 UTC m=+0.557918448 container remove 0af452c8afbff09845b46445e1c64ebbdfee52765e9bf07161ca77d77a377e72 (image=quay.io/ceph/ceph:v19, name=crazy_sanderson, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:58 compute-0 systemd[1]: libpod-conmon-0af452c8afbff09845b46445e1c64ebbdfee52765e9bf07161ca77d77a377e72.scope: Deactivated successfully.
Dec 05 09:43:58 compute-0 podman[75461]: 2025-12-05 09:43:58.293565153 +0000 UTC m=+0.050250971 container create 4a96323776526e0062b7b79ba3b327f6bce047f2f2c83e33044b6f5e2deeb644 (image=quay.io/ceph/ceph:v19, name=modest_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:58 compute-0 systemd[1]: Started libpod-conmon-4a96323776526e0062b7b79ba3b327f6bce047f2f2c83e33044b6f5e2deeb644.scope.
Dec 05 09:43:58 compute-0 podman[75461]: 2025-12-05 09:43:58.261723176 +0000 UTC m=+0.018409014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6a350e914ea5faa0c36cc3698e5338092a029e0a17cf2d9c198ac0d18b444f/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6a350e914ea5faa0c36cc3698e5338092a029e0a17cf2d9c198ac0d18b444f/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6a350e914ea5faa0c36cc3698e5338092a029e0a17cf2d9c198ac0d18b444f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6a350e914ea5faa0c36cc3698e5338092a029e0a17cf2d9c198ac0d18b444f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6a350e914ea5faa0c36cc3698e5338092a029e0a17cf2d9c198ac0d18b444f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:58 compute-0 podman[75461]: 2025-12-05 09:43:58.401994907 +0000 UTC m=+0.158680755 container init 4a96323776526e0062b7b79ba3b327f6bce047f2f2c83e33044b6f5e2deeb644 (image=quay.io/ceph/ceph:v19, name=modest_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:43:58 compute-0 podman[75461]: 2025-12-05 09:43:58.409414418 +0000 UTC m=+0.166100246 container start 4a96323776526e0062b7b79ba3b327f6bce047f2f2c83e33044b6f5e2deeb644 (image=quay.io/ceph/ceph:v19, name=modest_feynman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 09:43:58 compute-0 podman[75461]: 2025-12-05 09:43:58.41409624 +0000 UTC m=+0.170782058 container attach 4a96323776526e0062b7b79ba3b327f6bce047f2f2c83e33044b6f5e2deeb644 (image=quay.io/ceph/ceph:v19, name=modest_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:58 compute-0 ceph-mon[74418]: from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 05 09:43:58 compute-0 ceph-mon[74418]: from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 05 09:43:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 09:43:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:58 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.hvnxai(active, since 2s)
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:43:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:43:58] ENGINE Bus STARTING
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:43:58] ENGINE Bus STARTING
Dec 05 09:43:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: [cephadm INFO root] Set ssh private key
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:43:58] ENGINE Serving on https://192.168.122.100:7150
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:43:58] ENGINE Serving on https://192.168.122.100:7150
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:43:58] ENGINE Client ('192.168.122.100', 59774) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:43:58] ENGINE Client ('192.168.122.100', 59774) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
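The ENGINE "Client ... lost" entry is CherryPy noting that a peer opened the cephadm agent endpoint on port 7150 and dropped the connection during the TLS handshake; a bare TCP probe that sends nothing is enough to produce it, as this illustrative snippet suggests:

    import socket

    # Connect to the TLS port and close without handshaking; the server
    # side logs the "(6, 'TLS/SSL connection has been closed (EOF) ...')"
    # entry seen above.
    with socket.create_connection(("192.168.122.100", 7150), timeout=2):
        pass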
Dec 05 09:43:58 compute-0 systemd[1]: libpod-4a96323776526e0062b7b79ba3b327f6bce047f2f2c83e33044b6f5e2deeb644.scope: Deactivated successfully.
Dec 05 09:43:58 compute-0 podman[75528]: 2025-12-05 09:43:58.945535288 +0000 UTC m=+0.033441362 container died 4a96323776526e0062b7b79ba3b327f6bce047f2f2c83e33044b6f5e2deeb644 (image=quay.io/ceph/ceph:v19, name=modest_feynman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:43:58] ENGINE Serving on http://192.168.122.100:8765
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:43:58] ENGINE Serving on http://192.168.122.100:8765
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:43:58] ENGINE Bus STARTED
Dec 05 09:43:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:43:58] ENGINE Bus STARTED
Dec 05 09:43:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 05 09:43:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 09:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d6a350e914ea5faa0c36cc3698e5338092a029e0a17cf2d9c198ac0d18b444f-merged.mount: Deactivated successfully.
Dec 05 09:43:59 compute-0 podman[75528]: 2025-12-05 09:43:59.0726888 +0000 UTC m=+0.160594834 container remove 4a96323776526e0062b7b79ba3b327f6bce047f2f2c83e33044b6f5e2deeb644 (image=quay.io/ceph/ceph:v19, name=modest_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:43:59 compute-0 systemd[1]: libpod-conmon-4a96323776526e0062b7b79ba3b327f6bce047f2f2c83e33044b6f5e2deeb644.scope: Deactivated successfully.
Dec 05 09:43:59 compute-0 podman[75543]: 2025-12-05 09:43:59.161993981 +0000 UTC m=+0.053536876 container create f1aa9047123e72c13acdbaae9eb05534fc22a68ffeed9ffeb7d5656eb0fb63d7 (image=quay.io/ceph/ceph:v19, name=laughing_proskuriakova, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:43:59 compute-0 systemd[1]: Started libpod-conmon-f1aa9047123e72c13acdbaae9eb05534fc22a68ffeed9ffeb7d5656eb0fb63d7.scope.
Dec 05 09:43:59 compute-0 podman[75543]: 2025-12-05 09:43:59.135914536 +0000 UTC m=+0.027457451 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1731b7540d8cf50563dd3e3fe7e615dd8a637ba4e03f410eefd0a746dc0538/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1731b7540d8cf50563dd3e3fe7e615dd8a637ba4e03f410eefd0a746dc0538/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1731b7540d8cf50563dd3e3fe7e615dd8a637ba4e03f410eefd0a746dc0538/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1731b7540d8cf50563dd3e3fe7e615dd8a637ba4e03f410eefd0a746dc0538/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1731b7540d8cf50563dd3e3fe7e615dd8a637ba4e03f410eefd0a746dc0538/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:59 compute-0 podman[75543]: 2025-12-05 09:43:59.259961078 +0000 UTC m=+0.151504033 container init f1aa9047123e72c13acdbaae9eb05534fc22a68ffeed9ffeb7d5656eb0fb63d7 (image=quay.io/ceph/ceph:v19, name=laughing_proskuriakova, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 09:43:59 compute-0 podman[75543]: 2025-12-05 09:43:59.268640745 +0000 UTC m=+0.160183660 container start f1aa9047123e72c13acdbaae9eb05534fc22a68ffeed9ffeb7d5656eb0fb63d7 (image=quay.io/ceph/ceph:v19, name=laughing_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 09:43:59 compute-0 podman[75543]: 2025-12-05 09:43:59.273107159 +0000 UTC m=+0.164650114 container attach f1aa9047123e72c13acdbaae9eb05534fc22a68ffeed9ffeb7d5656eb0fb63d7 (image=quay.io/ceph/ceph:v19, name=laughing_proskuriakova, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 05 09:43:59 compute-0 ceph-mon[74418]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:43:59 compute-0 ceph-mon[74418]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:43:59 compute-0 ceph-mon[74418]: Set ssh ssh_user
Dec 05 09:43:59 compute-0 ceph-mon[74418]: Set ssh ssh_config
Dec 05 09:43:59 compute-0 ceph-mon[74418]: ssh user set to ceph-admin. sudo will be used
Dec 05 09:43:59 compute-0 ceph-mon[74418]: mgrmap e8: compute-0.hvnxai(active, since 2s)
Dec 05 09:43:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 09:43:59 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:43:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec 05 09:43:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:43:59 compute-0 ceph-mgr[74711]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 05 09:43:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 05 09:43:59 compute-0 systemd[1]: libpod-f1aa9047123e72c13acdbaae9eb05534fc22a68ffeed9ffeb7d5656eb0fb63d7.scope: Deactivated successfully.
Dec 05 09:43:59 compute-0 podman[75543]: 2025-12-05 09:43:59.706045336 +0000 UTC m=+0.597588261 container died f1aa9047123e72c13acdbaae9eb05534fc22a68ffeed9ffeb7d5656eb0fb63d7 (image=quay.io/ceph/ceph:v19, name=laughing_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 09:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa1731b7540d8cf50563dd3e3fe7e615dd8a637ba4e03f410eefd0a746dc0538-merged.mount: Deactivated successfully.
Dec 05 09:43:59 compute-0 podman[75543]: 2025-12-05 09:43:59.740687828 +0000 UTC m=+0.632230733 container remove f1aa9047123e72c13acdbaae9eb05534fc22a68ffeed9ffeb7d5656eb0fb63d7 (image=quay.io/ceph/ceph:v19, name=laughing_proskuriakova, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 09:43:59 compute-0 systemd[1]: libpod-conmon-f1aa9047123e72c13acdbaae9eb05534fc22a68ffeed9ffeb7d5656eb0fb63d7.scope: Deactivated successfully.
Dec 05 09:43:59 compute-0 podman[75597]: 2025-12-05 09:43:59.819368839 +0000 UTC m=+0.051371775 container create 1b84cdc1c140e7852f4063d4ffb54611e042914138e9f6366e6cd24ea731d88f (image=quay.io/ceph/ceph:v19, name=cool_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:43:59 compute-0 systemd[1]: Started libpod-conmon-1b84cdc1c140e7852f4063d4ffb54611e042914138e9f6366e6cd24ea731d88f.scope.
Dec 05 09:43:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774c0e81bc738d542944d93f769d84c32bdd6c4c76534dcb3f470591c2410384/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774c0e81bc738d542944d93f769d84c32bdd6c4c76534dcb3f470591c2410384/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774c0e81bc738d542944d93f769d84c32bdd6c4c76534dcb3f470591c2410384/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:43:59 compute-0 podman[75597]: 2025-12-05 09:43:59.890965996 +0000 UTC m=+0.122968972 container init 1b84cdc1c140e7852f4063d4ffb54611e042914138e9f6366e6cd24ea731d88f (image=quay.io/ceph/ceph:v19, name=cool_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 05 09:43:59 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:43:59 compute-0 podman[75597]: 2025-12-05 09:43:59.896521112 +0000 UTC m=+0.128524058 container start 1b84cdc1c140e7852f4063d4ffb54611e042914138e9f6366e6cd24ea731d88f (image=quay.io/ceph/ceph:v19, name=cool_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 05 09:43:59 compute-0 podman[75597]: 2025-12-05 09:43:59.802899259 +0000 UTC m=+0.034902225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:43:59 compute-0 podman[75597]: 2025-12-05 09:43:59.900455652 +0000 UTC m=+0.132458628 container attach 1b84cdc1c140e7852f4063d4ffb54611e042914138e9f6366e6cd24ea731d88f (image=quay.io/ceph/ceph:v19, name=cool_heisenberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 09:44:00 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:00 compute-0 cool_heisenberg[75614]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCX2L/gfnx1bUdS7E+aTmifa3mxwpZOqFmqgJu3Lk8O4P5WGeKoJdTFU9SBDYIxMjCdk0fAK9ONoy32CQCZbqnY1M23ayXuVN1dAVu5jhE+SmHPw6qnTS23HsvO8LJBbnJ40NI+zyr43zRxYnr3WyL5gh3RxYlBD7qWx6tyC+nJngw2l308LP7eOOZ8qxSH+LUIgDRCFx8l0KnQo2rrZ060q/y/M5oAr0vY20wWCci2p/IFHxF2YzyuMS9cUqf1nDypkLZHqb4u/QMs/tmq5vaqWgBzc8jeJGSUyHwmfY726mPZPFmXhG3IFHQfUVWIC7/NUSMvWByuwZoCX+OG8G2vbnFNM1sFRjLO2toRoq3a7rHZHl/84vUNISg3xFVYATnmZjJXJcqi2gTw/VxEMTxom4vrR3Id5QaROC3QMUo4YODpHrQKx48oJ6i7fLRIiqzp2HMHkP39bzCkyhyiXXtMcLuHIObsVsY5A4uxBf0whhD6BZVqW9VXtLZLJzSZ3Vk= zuul@controller
Dec 05 09:44:00 compute-0 systemd[1]: libpod-1b84cdc1c140e7852f4063d4ffb54611e042914138e9f6366e6cd24ea731d88f.scope: Deactivated successfully.
Dec 05 09:44:00 compute-0 conmon[75614]: conmon 1b84cdc1c140e7852f40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b84cdc1c140e7852f4063d4ffb54611e042914138e9f6366e6cd24ea731d88f.scope/container/memory.events
Dec 05 09:44:00 compute-0 podman[75597]: 2025-12-05 09:44:00.258698282 +0000 UTC m=+0.490701318 container died 1b84cdc1c140e7852f4063d4ffb54611e042914138e9f6366e6cd24ea731d88f (image=quay.io/ceph/ceph:v19, name=cool_heisenberg, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-774c0e81bc738d542944d93f769d84c32bdd6c4c76534dcb3f470591c2410384-merged.mount: Deactivated successfully.
Dec 05 09:44:00 compute-0 podman[75597]: 2025-12-05 09:44:00.302163862 +0000 UTC m=+0.534166818 container remove 1b84cdc1c140e7852f4063d4ffb54611e042914138e9f6366e6cd24ea731d88f (image=quay.io/ceph/ceph:v19, name=cool_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 09:44:00 compute-0 systemd[1]: libpod-conmon-1b84cdc1c140e7852f4063d4ffb54611e042914138e9f6366e6cd24ea731d88f.scope: Deactivated successfully.
Dec 05 09:44:00 compute-0 podman[75651]: 2025-12-05 09:44:00.374690538 +0000 UTC m=+0.047471388 container create 65dbb9b1eb2150db11aca95b4ed844ae3ce58b057cd846d7c03709cc69fe1dae (image=quay.io/ceph/ceph:v19, name=silly_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 05 09:44:00 compute-0 systemd[1]: Started libpod-conmon-65dbb9b1eb2150db11aca95b4ed844ae3ce58b057cd846d7c03709cc69fe1dae.scope.
Dec 05 09:44:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/427e722d75e09dc03849c0961211a0b95c679f233c5777fe8682c3e02138e522/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/427e722d75e09dc03849c0961211a0b95c679f233c5777fe8682c3e02138e522/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/427e722d75e09dc03849c0961211a0b95c679f233c5777fe8682c3e02138e522/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:00 compute-0 podman[75651]: 2025-12-05 09:44:00.351471643 +0000 UTC m=+0.024252293 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:00 compute-0 ceph-mon[74418]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:00 compute-0 ceph-mon[74418]: [05/Dec/2025:09:43:58] ENGINE Bus STARTING
Dec 05 09:44:00 compute-0 ceph-mon[74418]: Set ssh ssh_identity_key
Dec 05 09:44:00 compute-0 ceph-mon[74418]: Set ssh private key
Dec 05 09:44:00 compute-0 ceph-mon[74418]: [05/Dec/2025:09:43:58] ENGINE Serving on https://192.168.122.100:7150
Dec 05 09:44:00 compute-0 ceph-mon[74418]: [05/Dec/2025:09:43:58] ENGINE Client ('192.168.122.100', 59774) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:44:00 compute-0 ceph-mon[74418]: [05/Dec/2025:09:43:58] ENGINE Serving on http://192.168.122.100:8765
Dec 05 09:44:00 compute-0 ceph-mon[74418]: [05/Dec/2025:09:43:58] ENGINE Bus STARTED
Dec 05 09:44:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:00 compute-0 podman[75651]: 2025-12-05 09:44:00.448704868 +0000 UTC m=+0.121485518 container init 65dbb9b1eb2150db11aca95b4ed844ae3ce58b057cd846d7c03709cc69fe1dae (image=quay.io/ceph/ceph:v19, name=silly_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:44:00 compute-0 podman[75651]: 2025-12-05 09:44:00.453845819 +0000 UTC m=+0.126626449 container start 65dbb9b1eb2150db11aca95b4ed844ae3ce58b057cd846d7c03709cc69fe1dae (image=quay.io/ceph/ceph:v19, name=silly_zhukovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 05 09:44:00 compute-0 podman[75651]: 2025-12-05 09:44:00.456827578 +0000 UTC m=+0.129608208 container attach 65dbb9b1eb2150db11aca95b4ed844ae3ce58b057cd846d7c03709cc69fe1dae (image=quay.io/ceph/ceph:v19, name=silly_zhukovsky, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:44:00 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:01 compute-0 sshd-session[75693]: Accepted publickey for ceph-admin from 192.168.122.100 port 44230 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:44:01 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 05 09:44:01 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 05 09:44:01 compute-0 systemd-logind[789]: New session 21 of user ceph-admin.
Dec 05 09:44:01 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 05 09:44:01 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 05 09:44:01 compute-0 systemd[75697]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:01 compute-0 systemd[75697]: Queued start job for default target Main User Target.
Dec 05 09:44:01 compute-0 sshd-session[75710]: Accepted publickey for ceph-admin from 192.168.122.100 port 44232 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:44:01 compute-0 systemd[75697]: Created slice User Application Slice.
Dec 05 09:44:01 compute-0 systemd[75697]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 09:44:01 compute-0 systemd[75697]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 09:44:01 compute-0 systemd[75697]: Reached target Paths.
Dec 05 09:44:01 compute-0 systemd[75697]: Reached target Timers.
Dec 05 09:44:01 compute-0 systemd[75697]: Starting D-Bus User Message Bus Socket...
Dec 05 09:44:01 compute-0 systemd-logind[789]: New session 23 of user ceph-admin.
Dec 05 09:44:01 compute-0 systemd[75697]: Starting Create User's Volatile Files and Directories...
Dec 05 09:44:01 compute-0 systemd[75697]: Finished Create User's Volatile Files and Directories.
Dec 05 09:44:01 compute-0 systemd[75697]: Listening on D-Bus User Message Bus Socket.
Dec 05 09:44:01 compute-0 systemd[75697]: Reached target Sockets.
Dec 05 09:44:01 compute-0 systemd[75697]: Reached target Basic System.
Dec 05 09:44:01 compute-0 systemd[75697]: Reached target Main User Target.
Dec 05 09:44:01 compute-0 systemd[75697]: Startup finished in 128ms.
Dec 05 09:44:01 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 05 09:44:01 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Dec 05 09:44:01 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Dec 05 09:44:01 compute-0 sshd-session[75693]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:01 compute-0 sshd-session[75710]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:01 compute-0 sudo[75718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:01 compute-0 sudo[75718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:01 compute-0 sudo[75718]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:01 compute-0 ceph-mon[74418]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:01 compute-0 ceph-mon[74418]: Set ssh ssh_identity_pub
Dec 05 09:44:01 compute-0 ceph-mon[74418]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053013 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:44:01 compute-0 sshd-session[75743]: Accepted publickey for ceph-admin from 192.168.122.100 port 44248 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:44:01 compute-0 systemd-logind[789]: New session 24 of user ceph-admin.
Dec 05 09:44:01 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Dec 05 09:44:01 compute-0 sshd-session[75743]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:01 compute-0 sudo[75747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 05 09:44:01 compute-0 sudo[75747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:01 compute-0 sudo[75747]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:01 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:44:01 compute-0 sshd-session[75772]: Accepted publickey for ceph-admin from 192.168.122.100 port 44258 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:44:01 compute-0 systemd-logind[789]: New session 25 of user ceph-admin.
Dec 05 09:44:01 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec 05 09:44:01 compute-0 sshd-session[75772]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:02 compute-0 sudo[75776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 05 09:44:02 compute-0 sudo[75776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:02 compute-0 sudo[75776]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:02 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 05 09:44:02 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 05 09:44:02 compute-0 sshd-session[75801]: Accepted publickey for ceph-admin from 192.168.122.100 port 44268 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:44:02 compute-0 systemd-logind[789]: New session 26 of user ceph-admin.
Dec 05 09:44:02 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec 05 09:44:02 compute-0 sshd-session[75801]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:02 compute-0 sudo[75805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:44:02 compute-0 sudo[75805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:02 compute-0 sudo[75805]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:02 compute-0 ceph-mon[74418]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:02 compute-0 sshd-session[75830]: Accepted publickey for ceph-admin from 192.168.122.100 port 44270 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:44:02 compute-0 systemd-logind[789]: New session 27 of user ceph-admin.
Dec 05 09:44:02 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec 05 09:44:02 compute-0 sshd-session[75830]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:02 compute-0 sudo[75834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:44:02 compute-0 sudo[75834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:02 compute-0 sudo[75834]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:02 compute-0 sshd-session[75859]: Accepted publickey for ceph-admin from 192.168.122.100 port 55672 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:44:02 compute-0 systemd-logind[789]: New session 28 of user ceph-admin.
Dec 05 09:44:02 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec 05 09:44:02 compute-0 sshd-session[75859]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:03 compute-0 sudo[75863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 05 09:44:03 compute-0 sudo[75863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:03 compute-0 sudo[75863]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:03 compute-0 sshd-session[75888]: Accepted publickey for ceph-admin from 192.168.122.100 port 55682 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:44:03 compute-0 systemd-logind[789]: New session 29 of user ceph-admin.
Dec 05 09:44:03 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec 05 09:44:03 compute-0 sshd-session[75888]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:03 compute-0 sudo[75892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:44:03 compute-0 sudo[75892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:03 compute-0 sudo[75892]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:03 compute-0 ceph-mon[74418]: Deploying cephadm binary to compute-0
Dec 05 09:44:03 compute-0 sshd-session[75917]: Accepted publickey for ceph-admin from 192.168.122.100 port 55694 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:44:03 compute-0 systemd-logind[789]: New session 30 of user ceph-admin.
Dec 05 09:44:03 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 05 09:44:03 compute-0 sshd-session[75917]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:03 compute-0 sudo[75921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 05 09:44:03 compute-0 sudo[75921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:03 compute-0 sudo[75921]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:03 compute-0 sshd-session[75946]: Accepted publickey for ceph-admin from 192.168.122.100 port 55708 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:44:03 compute-0 systemd-logind[789]: New session 31 of user ceph-admin.
Dec 05 09:44:03 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 05 09:44:03 compute-0 sshd-session[75946]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:03 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:44:04 compute-0 sshd-session[75973]: Accepted publickey for ceph-admin from 192.168.122.100 port 55722 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:44:04 compute-0 systemd-logind[789]: New session 32 of user ceph-admin.
Dec 05 09:44:04 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 05 09:44:04 compute-0 sshd-session[75973]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:05 compute-0 sudo[75977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 05 09:44:05 compute-0 sudo[75977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:05 compute-0 sudo[75977]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:05 compute-0 sshd-session[76002]: Accepted publickey for ceph-admin from 192.168.122.100 port 55732 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:44:05 compute-0 systemd-logind[789]: New session 33 of user ceph-admin.
Dec 05 09:44:05 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec 05 09:44:05 compute-0 sshd-session[76002]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:44:05 compute-0 sudo[76006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 05 09:44:05 compute-0 sudo[76006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:05 compute-0 sudo[76006]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:44:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:05 compute-0 ceph-mgr[74711]: [cephadm INFO root] Added host compute-0
Dec 05 09:44:05 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 05 09:44:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 05 09:44:05 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 09:44:05 compute-0 silly_zhukovsky[75667]: Added host 'compute-0' with addr '192.168.122.100'
Dec 05 09:44:05 compute-0 systemd[1]: libpod-65dbb9b1eb2150db11aca95b4ed844ae3ce58b057cd846d7c03709cc69fe1dae.scope: Deactivated successfully.
Dec 05 09:44:05 compute-0 podman[75651]: 2025-12-05 09:44:05.801635444 +0000 UTC m=+5.474416074 container died 65dbb9b1eb2150db11aca95b4ed844ae3ce58b057cd846d7c03709cc69fe1dae (image=quay.io/ceph/ceph:v19, name=silly_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-427e722d75e09dc03849c0961211a0b95c679f233c5777fe8682c3e02138e522-merged.mount: Deactivated successfully.
Dec 05 09:44:05 compute-0 sudo[76051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:05 compute-0 podman[75651]: 2025-12-05 09:44:05.842592526 +0000 UTC m=+5.515373166 container remove 65dbb9b1eb2150db11aca95b4ed844ae3ce58b057cd846d7c03709cc69fe1dae (image=quay.io/ceph/ceph:v19, name=silly_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Dec 05 09:44:05 compute-0 sudo[76051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:05 compute-0 sudo[76051]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:05 compute-0 systemd[1]: libpod-conmon-65dbb9b1eb2150db11aca95b4ed844ae3ce58b057cd846d7c03709cc69fe1dae.scope: Deactivated successfully.
Dec 05 09:44:05 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:44:05 compute-0 sudo[76089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Dec 05 09:44:05 compute-0 sudo[76089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:05 compute-0 podman[76088]: 2025-12-05 09:44:05.91142121 +0000 UTC m=+0.043992305 container create 8fe1967af982d9eba7940db18825cc64a41d0f09d886446a3ecedb04060bd4a4 (image=quay.io/ceph/ceph:v19, name=focused_carver, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:44:05 compute-0 systemd[1]: Started libpod-conmon-8fe1967af982d9eba7940db18825cc64a41d0f09d886446a3ecedb04060bd4a4.scope.
Dec 05 09:44:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ade58dae266dd3f130fcad5484570fa24b47b7ddf0d2b8ca79910c487e64fab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ade58dae266dd3f130fcad5484570fa24b47b7ddf0d2b8ca79910c487e64fab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ade58dae266dd3f130fcad5484570fa24b47b7ddf0d2b8ca79910c487e64fab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:05 compute-0 podman[76088]: 2025-12-05 09:44:05.981360886 +0000 UTC m=+0.113932031 container init 8fe1967af982d9eba7940db18825cc64a41d0f09d886446a3ecedb04060bd4a4 (image=quay.io/ceph/ceph:v19, name=focused_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 05 09:44:05 compute-0 podman[76088]: 2025-12-05 09:44:05.989102778 +0000 UTC m=+0.121673873 container start 8fe1967af982d9eba7940db18825cc64a41d0f09d886446a3ecedb04060bd4a4 (image=quay.io/ceph/ceph:v19, name=focused_carver, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:44:05 compute-0 podman[76088]: 2025-12-05 09:44:05.894517308 +0000 UTC m=+0.027088413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:05 compute-0 podman[76088]: 2025-12-05 09:44:05.994094685 +0000 UTC m=+0.126665780 container attach 8fe1967af982d9eba7940db18825cc64a41d0f09d886446a3ecedb04060bd4a4 (image=quay.io/ceph/ceph:v19, name=focused_carver, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 05 09:44:06 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:06 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 05 09:44:06 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 05 09:44:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 05 09:44:06 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:06 compute-0 focused_carver[76129]: Scheduled mon update...
Dec 05 09:44:06 compute-0 systemd[1]: libpod-8fe1967af982d9eba7940db18825cc64a41d0f09d886446a3ecedb04060bd4a4.scope: Deactivated successfully.
Dec 05 09:44:06 compute-0 podman[76088]: 2025-12-05 09:44:06.440961734 +0000 UTC m=+0.573532849 container died 8fe1967af982d9eba7940db18825cc64a41d0f09d886446a3ecedb04060bd4a4 (image=quay.io/ceph/ceph:v19, name=focused_carver, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:44:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ade58dae266dd3f130fcad5484570fa24b47b7ddf0d2b8ca79910c487e64fab-merged.mount: Deactivated successfully.
Dec 05 09:44:06 compute-0 podman[76088]: 2025-12-05 09:44:06.569643958 +0000 UTC m=+0.702215053 container remove 8fe1967af982d9eba7940db18825cc64a41d0f09d886446a3ecedb04060bd4a4 (image=quay.io/ceph/ceph:v19, name=focused_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:44:06 compute-0 systemd[1]: libpod-conmon-8fe1967af982d9eba7940db18825cc64a41d0f09d886446a3ecedb04060bd4a4.scope: Deactivated successfully.
Dec 05 09:44:06 compute-0 podman[76192]: 2025-12-05 09:44:06.652318543 +0000 UTC m=+0.056899350 container create e82bbbcf533156e6456a2f5995895cefc1650e42b8d8387a5f1656a7fc1861c6 (image=quay.io/ceph/ceph:v19, name=laughing_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:44:06 compute-0 systemd[1]: Started libpod-conmon-e82bbbcf533156e6456a2f5995895cefc1650e42b8d8387a5f1656a7fc1861c6.scope.
Dec 05 09:44:06 compute-0 podman[76163]: 2025-12-05 09:44:06.695794033 +0000 UTC m=+0.562615759 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c31db9612a274c61b76e125972f120061aa230b15b5fa9f166f4e1248b5de91/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c31db9612a274c61b76e125972f120061aa230b15b5fa9f166f4e1248b5de91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c31db9612a274c61b76e125972f120061aa230b15b5fa9f166f4e1248b5de91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:06 compute-0 podman[76192]: 2025-12-05 09:44:06.633331092 +0000 UTC m=+0.037911909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:06 compute-0 podman[76192]: 2025-12-05 09:44:06.740777455 +0000 UTC m=+0.145358262 container init e82bbbcf533156e6456a2f5995895cefc1650e42b8d8387a5f1656a7fc1861c6 (image=quay.io/ceph/ceph:v19, name=laughing_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:06 compute-0 podman[76192]: 2025-12-05 09:44:06.746605114 +0000 UTC m=+0.151185901 container start e82bbbcf533156e6456a2f5995895cefc1650e42b8d8387a5f1656a7fc1861c6 (image=quay.io/ceph/ceph:v19, name=laughing_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 05 09:44:07 compute-0 podman[76192]: 2025-12-05 09:44:07.51941596 +0000 UTC m=+0.923996827 container attach e82bbbcf533156e6456a2f5995895cefc1650e42b8d8387a5f1656a7fc1861c6 (image=quay.io/ceph/ceph:v19, name=laughing_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 09:44:07 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:07 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 05 09:44:07 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 05 09:44:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 05 09:44:07 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:07 compute-0 ceph-mon[74418]: Added host compute-0
Dec 05 09:44:07 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 09:44:07 compute-0 ceph-mon[74418]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:07 compute-0 ceph-mon[74418]: Saving service mon spec with placement count:5
Dec 05 09:44:07 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:07 compute-0 podman[76228]: 2025-12-05 09:44:07.611817471 +0000 UTC m=+0.857670791 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:07 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:44:08 compute-0 podman[76228]: 2025-12-05 09:44:08.068484248 +0000 UTC m=+1.314337538 container create a05ab1616a27a315bcce4c2c66ef4257956897d24895b834ef5aa2a5d8b85f05 (image=quay.io/ceph/ceph:v19, name=practical_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:08 compute-0 laughing_chebyshev[76210]: Scheduled mgr update...
Dec 05 09:44:08 compute-0 systemd[1]: Started libpod-conmon-a05ab1616a27a315bcce4c2c66ef4257956897d24895b834ef5aa2a5d8b85f05.scope.
Dec 05 09:44:08 compute-0 systemd[1]: libpod-e82bbbcf533156e6456a2f5995895cefc1650e42b8d8387a5f1656a7fc1861c6.scope: Deactivated successfully.
Dec 05 09:44:08 compute-0 podman[76192]: 2025-12-05 09:44:08.266526772 +0000 UTC m=+1.671107569 container died e82bbbcf533156e6456a2f5995895cefc1650e42b8d8387a5f1656a7fc1861c6 (image=quay.io/ceph/ceph:v19, name=laughing_chebyshev, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 05 09:44:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c31db9612a274c61b76e125972f120061aa230b15b5fa9f166f4e1248b5de91-merged.mount: Deactivated successfully.
Dec 05 09:44:08 compute-0 podman[76228]: 2025-12-05 09:44:08.305812048 +0000 UTC m=+1.551665358 container init a05ab1616a27a315bcce4c2c66ef4257956897d24895b834ef5aa2a5d8b85f05 (image=quay.io/ceph/ceph:v19, name=practical_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:08 compute-0 podman[76192]: 2025-12-05 09:44:08.309317454 +0000 UTC m=+1.713898251 container remove e82bbbcf533156e6456a2f5995895cefc1650e42b8d8387a5f1656a7fc1861c6 (image=quay.io/ceph/ceph:v19, name=laughing_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:44:08 compute-0 podman[76228]: 2025-12-05 09:44:08.312069749 +0000 UTC m=+1.557923039 container start a05ab1616a27a315bcce4c2c66ef4257956897d24895b834ef5aa2a5d8b85f05 (image=quay.io/ceph/ceph:v19, name=practical_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:44:08 compute-0 podman[76228]: 2025-12-05 09:44:08.315346839 +0000 UTC m=+1.561200159 container attach a05ab1616a27a315bcce4c2c66ef4257956897d24895b834ef5aa2a5d8b85f05 (image=quay.io/ceph/ceph:v19, name=practical_kowalevski, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 05 09:44:08 compute-0 systemd[1]: libpod-conmon-e82bbbcf533156e6456a2f5995895cefc1650e42b8d8387a5f1656a7fc1861c6.scope: Deactivated successfully.
Dec 05 09:44:08 compute-0 podman[76283]: 2025-12-05 09:44:08.40590136 +0000 UTC m=+0.079465858 container create 8597cc9bf3dd3fc1573fe33bece367c4ca002ea24b0ff99a96d766c46eef03d2 (image=quay.io/ceph/ceph:v19, name=peaceful_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:44:08 compute-0 systemd[1]: Started libpod-conmon-8597cc9bf3dd3fc1573fe33bece367c4ca002ea24b0ff99a96d766c46eef03d2.scope.
Dec 05 09:44:08 compute-0 practical_kowalevski[76266]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec 05 09:44:08 compute-0 systemd[1]: libpod-a05ab1616a27a315bcce4c2c66ef4257956897d24895b834ef5aa2a5d8b85f05.scope: Deactivated successfully.
Dec 05 09:44:08 compute-0 podman[76228]: 2025-12-05 09:44:08.450904132 +0000 UTC m=+1.696757422 container died a05ab1616a27a315bcce4c2c66ef4257956897d24895b834ef5aa2a5d8b85f05 (image=quay.io/ceph/ceph:v19, name=practical_kowalevski, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad534fbf8cbc3ba2bc043ca4aba2979fe6e05e900292997222175c75b04f460/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad534fbf8cbc3ba2bc043ca4aba2979fe6e05e900292997222175c75b04f460/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad534fbf8cbc3ba2bc043ca4aba2979fe6e05e900292997222175c75b04f460/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef1a7ae54dd9eb8866aed64e5aacc939552ade749d730449b6616aed0d5a23e0-merged.mount: Deactivated successfully.
Dec 05 09:44:08 compute-0 podman[76283]: 2025-12-05 09:44:08.390172479 +0000 UTC m=+0.063737007 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:08 compute-0 podman[76283]: 2025-12-05 09:44:08.486933149 +0000 UTC m=+0.160497677 container init 8597cc9bf3dd3fc1573fe33bece367c4ca002ea24b0ff99a96d766c46eef03d2 (image=quay.io/ceph/ceph:v19, name=peaceful_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:44:08 compute-0 podman[76283]: 2025-12-05 09:44:08.492660886 +0000 UTC m=+0.166225384 container start 8597cc9bf3dd3fc1573fe33bece367c4ca002ea24b0ff99a96d766c46eef03d2 (image=quay.io/ceph/ceph:v19, name=peaceful_aryabhata, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:44:08 compute-0 podman[76283]: 2025-12-05 09:44:08.514124283 +0000 UTC m=+0.187688801 container attach 8597cc9bf3dd3fc1573fe33bece367c4ca002ea24b0ff99a96d766c46eef03d2 (image=quay.io/ceph/ceph:v19, name=peaceful_aryabhata, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:44:08 compute-0 podman[76228]: 2025-12-05 09:44:08.524927689 +0000 UTC m=+1.770780979 container remove a05ab1616a27a315bcce4c2c66ef4257956897d24895b834ef5aa2a5d8b85f05 (image=quay.io/ceph/ceph:v19, name=practical_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:44:08 compute-0 systemd[1]: libpod-conmon-a05ab1616a27a315bcce4c2c66ef4257956897d24895b834ef5aa2a5d8b85f05.scope: Deactivated successfully.
Dec 05 09:44:08 compute-0 sudo[76089]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec 05 09:44:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:08 compute-0 sudo[76314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:08 compute-0 sudo[76314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:08 compute-0 sudo[76314]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:08 compute-0 sudo[76358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 05 09:44:08 compute-0 sudo[76358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:08 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:08 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service crash spec with placement *
Dec 05 09:44:08 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 05 09:44:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 05 09:44:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:08 compute-0 peaceful_aryabhata[76299]: Scheduled crash update...
Dec 05 09:44:08 compute-0 systemd[1]: libpod-8597cc9bf3dd3fc1573fe33bece367c4ca002ea24b0ff99a96d766c46eef03d2.scope: Deactivated successfully.
Dec 05 09:44:08 compute-0 podman[76283]: 2025-12-05 09:44:08.939671318 +0000 UTC m=+0.613235826 container died 8597cc9bf3dd3fc1573fe33bece367c4ca002ea24b0ff99a96d766c46eef03d2 (image=quay.io/ceph/ceph:v19, name=peaceful_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-dad534fbf8cbc3ba2bc043ca4aba2979fe6e05e900292997222175c75b04f460-merged.mount: Deactivated successfully.
Dec 05 09:44:08 compute-0 podman[76283]: 2025-12-05 09:44:08.977792863 +0000 UTC m=+0.651357361 container remove 8597cc9bf3dd3fc1573fe33bece367c4ca002ea24b0ff99a96d766c46eef03d2 (image=quay.io/ceph/ceph:v19, name=peaceful_aryabhata, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 09:44:08 compute-0 systemd[1]: libpod-conmon-8597cc9bf3dd3fc1573fe33bece367c4ca002ea24b0ff99a96d766c46eef03d2.scope: Deactivated successfully.
Dec 05 09:44:08 compute-0 sudo[76358]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:44:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:09 compute-0 podman[76419]: 2025-12-05 09:44:09.040644324 +0000 UTC m=+0.044320905 container create 131097411b7180a85a6703811e10bdae7e35846cffc322194cc69bdcc0d3920a (image=quay.io/ceph/ceph:v19, name=interesting_stonebraker, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 09:44:09 compute-0 systemd[1]: Started libpod-conmon-131097411b7180a85a6703811e10bdae7e35846cffc322194cc69bdcc0d3920a.scope.
Dec 05 09:44:09 compute-0 sudo[76431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:09 compute-0 sudo[76431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:09 compute-0 sudo[76431]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68ca9528438d094ad6793e77c0829ac5f5c987747ef4b124ae0c26e70a9e45/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68ca9528438d094ad6793e77c0829ac5f5c987747ef4b124ae0c26e70a9e45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68ca9528438d094ad6793e77c0829ac5f5c987747ef4b124ae0c26e70a9e45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:09 compute-0 podman[76419]: 2025-12-05 09:44:09.106428566 +0000 UTC m=+0.110105167 container init 131097411b7180a85a6703811e10bdae7e35846cffc322194cc69bdcc0d3920a (image=quay.io/ceph/ceph:v19, name=interesting_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 09:44:09 compute-0 podman[76419]: 2025-12-05 09:44:09.017087319 +0000 UTC m=+0.020763960 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:09 compute-0 podman[76419]: 2025-12-05 09:44:09.1138634 +0000 UTC m=+0.117539991 container start 131097411b7180a85a6703811e10bdae7e35846cffc322194cc69bdcc0d3920a (image=quay.io/ceph/ceph:v19, name=interesting_stonebraker, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 09:44:09 compute-0 podman[76419]: 2025-12-05 09:44:09.117568981 +0000 UTC m=+0.121245592 container attach 131097411b7180a85a6703811e10bdae7e35846cffc322194cc69bdcc0d3920a (image=quay.io/ceph/ceph:v19, name=interesting_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:44:09 compute-0 sudo[76464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 09:44:09 compute-0 sudo[76464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:09 compute-0 ceph-mon[74418]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:09 compute-0 ceph-mon[74418]: Saving service mgr spec with placement count:2
Dec 05 09:44:09 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:09 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:09 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:09 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec 05 09:44:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/100530455' entity='client.admin' 
Dec 05 09:44:09 compute-0 podman[76419]: 2025-12-05 09:44:09.505261569 +0000 UTC m=+0.508938170 container died 131097411b7180a85a6703811e10bdae7e35846cffc322194cc69bdcc0d3920a (image=quay.io/ceph/ceph:v19, name=interesting_stonebraker, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 09:44:09 compute-0 systemd[1]: libpod-131097411b7180a85a6703811e10bdae7e35846cffc322194cc69bdcc0d3920a.scope: Deactivated successfully.
Dec 05 09:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b68ca9528438d094ad6793e77c0829ac5f5c987747ef4b124ae0c26e70a9e45-merged.mount: Deactivated successfully.
Dec 05 09:44:09 compute-0 podman[76419]: 2025-12-05 09:44:09.541360927 +0000 UTC m=+0.545037518 container remove 131097411b7180a85a6703811e10bdae7e35846cffc322194cc69bdcc0d3920a (image=quay.io/ceph/ceph:v19, name=interesting_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:44:09 compute-0 systemd[1]: libpod-conmon-131097411b7180a85a6703811e10bdae7e35846cffc322194cc69bdcc0d3920a.scope: Deactivated successfully.
Dec 05 09:44:09 compute-0 podman[76583]: 2025-12-05 09:44:09.608667771 +0000 UTC m=+0.045719603 container create 33ed044de3fb3ba7be3f2886f55777b49a2b03685c535c1a30766f1c6a024f3a (image=quay.io/ceph/ceph:v19, name=nostalgic_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:44:09 compute-0 systemd[1]: Started libpod-conmon-33ed044de3fb3ba7be3f2886f55777b49a2b03685c535c1a30766f1c6a024f3a.scope.
Dec 05 09:44:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8205227b4c47a93db7e50dba03741a1582db99d3fd82045c8002555ea91f37/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8205227b4c47a93db7e50dba03741a1582db99d3fd82045c8002555ea91f37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8205227b4c47a93db7e50dba03741a1582db99d3fd82045c8002555ea91f37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:09 compute-0 podman[76606]: 2025-12-05 09:44:09.656363737 +0000 UTC m=+0.051945383 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:44:09 compute-0 podman[76583]: 2025-12-05 09:44:09.667160233 +0000 UTC m=+0.104212085 container init 33ed044de3fb3ba7be3f2886f55777b49a2b03685c535c1a30766f1c6a024f3a (image=quay.io/ceph/ceph:v19, name=nostalgic_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 09:44:09 compute-0 podman[76583]: 2025-12-05 09:44:09.67290072 +0000 UTC m=+0.109952552 container start 33ed044de3fb3ba7be3f2886f55777b49a2b03685c535c1a30766f1c6a024f3a (image=quay.io/ceph/ceph:v19, name=nostalgic_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 05 09:44:09 compute-0 podman[76583]: 2025-12-05 09:44:09.676388445 +0000 UTC m=+0.113440297 container attach 33ed044de3fb3ba7be3f2886f55777b49a2b03685c535c1a30766f1c6a024f3a (image=quay.io/ceph/ceph:v19, name=nostalgic_shamir, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:44:09 compute-0 podman[76583]: 2025-12-05 09:44:09.587755969 +0000 UTC m=+0.024807831 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:09 compute-0 podman[76606]: 2025-12-05 09:44:09.767202063 +0000 UTC m=+0.162783689 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:44:09 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:44:09 compute-0 sudo[76464]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:44:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:09 compute-0 sudo[76680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:09 compute-0 sudo[76680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:09 compute-0 sudo[76680]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:10 compute-0 sudo[76705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:44:10 compute-0 sudo[76705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:10 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec 05 09:44:10 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:10 compute-0 systemd[1]: libpod-33ed044de3fb3ba7be3f2886f55777b49a2b03685c535c1a30766f1c6a024f3a.scope: Deactivated successfully.
Dec 05 09:44:10 compute-0 conmon[76622]: conmon 33ed044de3fb3ba7be3f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33ed044de3fb3ba7be3f2886f55777b49a2b03685c535c1a30766f1c6a024f3a.scope/container/memory.events
Dec 05 09:44:10 compute-0 podman[76583]: 2025-12-05 09:44:10.080785202 +0000 UTC m=+0.517837034 container died 33ed044de3fb3ba7be3f2886f55777b49a2b03685c535c1a30766f1c6a024f3a (image=quay.io/ceph/ceph:v19, name=nostalgic_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa8205227b4c47a93db7e50dba03741a1582db99d3fd82045c8002555ea91f37-merged.mount: Deactivated successfully.
Dec 05 09:44:10 compute-0 podman[76583]: 2025-12-05 09:44:10.12710194 +0000 UTC m=+0.564153772 container remove 33ed044de3fb3ba7be3f2886f55777b49a2b03685c535c1a30766f1c6a024f3a (image=quay.io/ceph/ceph:v19, name=nostalgic_shamir, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 09:44:10 compute-0 systemd[1]: libpod-conmon-33ed044de3fb3ba7be3f2886f55777b49a2b03685c535c1a30766f1c6a024f3a.scope: Deactivated successfully.
Dec 05 09:44:10 compute-0 podman[76746]: 2025-12-05 09:44:10.193333534 +0000 UTC m=+0.041595261 container create 2cead857257c582751f4f4c2fd6b054ad58533d6388e397a350d630d25b8b626 (image=quay.io/ceph/ceph:v19, name=vibrant_jennings, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 09:44:10 compute-0 systemd[1]: Started libpod-conmon-2cead857257c582751f4f4c2fd6b054ad58533d6388e397a350d630d25b8b626.scope.
Dec 05 09:44:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a4c505e9ae3e555b76d4c264d63763d72c1cf460e5ed8ef12c0d7ffd548d23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a4c505e9ae3e555b76d4c264d63763d72c1cf460e5ed8ef12c0d7ffd548d23/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a4c505e9ae3e555b76d4c264d63763d72c1cf460e5ed8ef12c0d7ffd548d23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:10 compute-0 ceph-mon[74418]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:10 compute-0 ceph-mon[74418]: Saving service crash spec with placement *
Dec 05 09:44:10 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/100530455' entity='client.admin' 
Dec 05 09:44:10 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:10 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:10 compute-0 podman[76746]: 2025-12-05 09:44:10.252253098 +0000 UTC m=+0.100514855 container init 2cead857257c582751f4f4c2fd6b054ad58533d6388e397a350d630d25b8b626 (image=quay.io/ceph/ceph:v19, name=vibrant_jennings, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 09:44:10 compute-0 podman[76746]: 2025-12-05 09:44:10.257229704 +0000 UTC m=+0.105491441 container start 2cead857257c582751f4f4c2fd6b054ad58533d6388e397a350d630d25b8b626 (image=quay.io/ceph/ceph:v19, name=vibrant_jennings, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 05 09:44:10 compute-0 podman[76746]: 2025-12-05 09:44:10.260580486 +0000 UTC m=+0.108842243 container attach 2cead857257c582751f4f4c2fd6b054ad58533d6388e397a350d630d25b8b626 (image=quay.io/ceph/ceph:v19, name=vibrant_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 09:44:10 compute-0 podman[76746]: 2025-12-05 09:44:10.175037542 +0000 UTC m=+0.023299299 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:10 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76779 (sysctl)
Dec 05 09:44:10 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 05 09:44:10 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 05 09:44:10 compute-0 sudo[76705]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:10 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:44:10 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:10 compute-0 ceph-mgr[74711]: [cephadm INFO root] Added label _admin to host compute-0
Dec 05 09:44:10 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 05 09:44:10 compute-0 vibrant_jennings[76762]: Added label _admin to host compute-0
Dec 05 09:44:10 compute-0 systemd[1]: libpod-2cead857257c582751f4f4c2fd6b054ad58533d6388e397a350d630d25b8b626.scope: Deactivated successfully.
Dec 05 09:44:10 compute-0 podman[76746]: 2025-12-05 09:44:10.649820276 +0000 UTC m=+0.498082013 container died 2cead857257c582751f4f4c2fd6b054ad58533d6388e397a350d630d25b8b626 (image=quay.io/ceph/ceph:v19, name=vibrant_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 09:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5a4c505e9ae3e555b76d4c264d63763d72c1cf460e5ed8ef12c0d7ffd548d23-merged.mount: Deactivated successfully.
Dec 05 09:44:10 compute-0 sudo[76822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:10 compute-0 podman[76746]: 2025-12-05 09:44:10.695674892 +0000 UTC m=+0.543936629 container remove 2cead857257c582751f4f4c2fd6b054ad58533d6388e397a350d630d25b8b626 (image=quay.io/ceph/ceph:v19, name=vibrant_jennings, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:44:10 compute-0 sudo[76822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:10 compute-0 systemd[1]: libpod-conmon-2cead857257c582751f4f4c2fd6b054ad58533d6388e397a350d630d25b8b626.scope: Deactivated successfully.
Dec 05 09:44:10 compute-0 sudo[76822]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:10 compute-0 sudo[76859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 05 09:44:10 compute-0 sudo[76859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:10 compute-0 podman[76858]: 2025-12-05 09:44:10.76317185 +0000 UTC m=+0.043046839 container create 97e2799f702179927e3d647f1416bddf8fa116068712c617321f2b39d0d6fc15 (image=quay.io/ceph/ceph:v19, name=sweet_joliot, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:10 compute-0 systemd[1]: Started libpod-conmon-97e2799f702179927e3d647f1416bddf8fa116068712c617321f2b39d0d6fc15.scope.
Dec 05 09:44:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d8480e1b29d948ca37823197a4a865c6e4dade696e22ca33a8a1b3f51a04ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d8480e1b29d948ca37823197a4a865c6e4dade696e22ca33a8a1b3f51a04ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d8480e1b29d948ca37823197a4a865c6e4dade696e22ca33a8a1b3f51a04ec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:10 compute-0 podman[76858]: 2025-12-05 09:44:10.743277255 +0000 UTC m=+0.023152264 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:10 compute-0 podman[76858]: 2025-12-05 09:44:10.860808034 +0000 UTC m=+0.140683043 container init 97e2799f702179927e3d647f1416bddf8fa116068712c617321f2b39d0d6fc15 (image=quay.io/ceph/ceph:v19, name=sweet_joliot, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 05 09:44:10 compute-0 podman[76858]: 2025-12-05 09:44:10.867742025 +0000 UTC m=+0.147617014 container start 97e2799f702179927e3d647f1416bddf8fa116068712c617321f2b39d0d6fc15 (image=quay.io/ceph/ceph:v19, name=sweet_joliot, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 09:44:10 compute-0 podman[76858]: 2025-12-05 09:44:10.871472467 +0000 UTC m=+0.151347456 container attach 97e2799f702179927e3d647f1416bddf8fa116068712c617321f2b39d0d6fc15 (image=quay.io/ceph/ceph:v19, name=sweet_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 05 09:44:11 compute-0 sudo[76859]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:44:11 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:11 compute-0 sudo[76941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:11 compute-0 sudo[76941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:11 compute-0 sudo[76941]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:11 compute-0 sudo[76966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- inventory --format=json-pretty --filter-for-batch
Dec 05 09:44:11 compute-0 sudo[76966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:11 compute-0 ceph-mon[74418]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:11 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:11 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec 05 09:44:11 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2067817219' entity='client.admin' 
Dec 05 09:44:11 compute-0 sweet_joliot[76900]: set mgr/dashboard/cluster/status
Dec 05 09:44:11 compute-0 systemd[1]: libpod-97e2799f702179927e3d647f1416bddf8fa116068712c617321f2b39d0d6fc15.scope: Deactivated successfully.
Dec 05 09:44:11 compute-0 podman[76858]: 2025-12-05 09:44:11.339995288 +0000 UTC m=+0.619870287 container died 97e2799f702179927e3d647f1416bddf8fa116068712c617321f2b39d0d6fc15 (image=quay.io/ceph/ceph:v19, name=sweet_joliot, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-58d8480e1b29d948ca37823197a4a865c6e4dade696e22ca33a8a1b3f51a04ec-merged.mount: Deactivated successfully.
Dec 05 09:44:11 compute-0 podman[76858]: 2025-12-05 09:44:11.396969079 +0000 UTC m=+0.676844068 container remove 97e2799f702179927e3d647f1416bddf8fa116068712c617321f2b39d0d6fc15 (image=quay.io/ceph/ceph:v19, name=sweet_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 09:44:11 compute-0 systemd[1]: libpod-conmon-97e2799f702179927e3d647f1416bddf8fa116068712c617321f2b39d0d6fc15.scope: Deactivated successfully.
Dec 05 09:44:11 compute-0 sudo[73347]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:44:11 compute-0 podman[77040]: 2025-12-05 09:44:11.549850696 +0000 UTC m=+0.039161494 container create 5ec52fb6c6d1a1bf674a66585b18ebc041f369e110ce980acdd8fd1b0a43b4c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 09:44:11 compute-0 systemd[1]: Started libpod-conmon-5ec52fb6c6d1a1bf674a66585b18ebc041f369e110ce980acdd8fd1b0a43b4c1.scope.
Dec 05 09:44:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:11 compute-0 podman[77040]: 2025-12-05 09:44:11.608081911 +0000 UTC m=+0.097392709 container init 5ec52fb6c6d1a1bf674a66585b18ebc041f369e110ce980acdd8fd1b0a43b4c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:44:11 compute-0 podman[77040]: 2025-12-05 09:44:11.614087875 +0000 UTC m=+0.103398653 container start 5ec52fb6c6d1a1bf674a66585b18ebc041f369e110ce980acdd8fd1b0a43b4c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_thompson, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 09:44:11 compute-0 unruffled_thompson[77056]: 167 167
Dec 05 09:44:11 compute-0 systemd[1]: libpod-5ec52fb6c6d1a1bf674a66585b18ebc041f369e110ce980acdd8fd1b0a43b4c1.scope: Deactivated successfully.
Dec 05 09:44:11 compute-0 podman[77040]: 2025-12-05 09:44:11.621317824 +0000 UTC m=+0.110628622 container attach 5ec52fb6c6d1a1bf674a66585b18ebc041f369e110ce980acdd8fd1b0a43b4c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_thompson, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 05 09:44:11 compute-0 podman[77040]: 2025-12-05 09:44:11.62191926 +0000 UTC m=+0.111230038 container died 5ec52fb6c6d1a1bf674a66585b18ebc041f369e110ce980acdd8fd1b0a43b4c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:44:11 compute-0 podman[77040]: 2025-12-05 09:44:11.530979569 +0000 UTC m=+0.020290367 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e06a94a3554209fa5c6b66da8a43df197e95eeb1d083800145f53d9c4b363e9-merged.mount: Deactivated successfully.
Dec 05 09:44:11 compute-0 podman[77040]: 2025-12-05 09:44:11.658631645 +0000 UTC m=+0.147942433 container remove 5ec52fb6c6d1a1bf674a66585b18ebc041f369e110ce980acdd8fd1b0a43b4c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_thompson, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 09:44:11 compute-0 systemd[1]: libpod-conmon-5ec52fb6c6d1a1bf674a66585b18ebc041f369e110ce980acdd8fd1b0a43b4c1.scope: Deactivated successfully.
Dec 05 09:44:11 compute-0 podman[77079]: 2025-12-05 09:44:11.844011682 +0000 UTC m=+0.052083527 container create 7d190033a772d3c68673236edaff9d2b03722f54bc1c72ce0b6d611df765fb05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_kalam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:44:11 compute-0 systemd[1]: Started libpod-conmon-7d190033a772d3c68673236edaff9d2b03722f54bc1c72ce0b6d611df765fb05.scope.
Dec 05 09:44:11 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:44:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9c29c2fe49e95d62ee5fca7b66459a673b8788b078f0de38e397f61d2053fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9c29c2fe49e95d62ee5fca7b66459a673b8788b078f0de38e397f61d2053fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9c29c2fe49e95d62ee5fca7b66459a673b8788b078f0de38e397f61d2053fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9c29c2fe49e95d62ee5fca7b66459a673b8788b078f0de38e397f61d2053fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:11 compute-0 podman[77079]: 2025-12-05 09:44:11.920061055 +0000 UTC m=+0.128132900 container init 7d190033a772d3c68673236edaff9d2b03722f54bc1c72ce0b6d611df765fb05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_kalam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 05 09:44:11 compute-0 podman[77079]: 2025-12-05 09:44:11.825278979 +0000 UTC m=+0.033350834 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:44:11 compute-0 podman[77079]: 2025-12-05 09:44:11.926624716 +0000 UTC m=+0.134696551 container start 7d190033a772d3c68673236edaff9d2b03722f54bc1c72ce0b6d611df765fb05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 09:44:11 compute-0 podman[77079]: 2025-12-05 09:44:11.929750931 +0000 UTC m=+0.137822796 container attach 7d190033a772d3c68673236edaff9d2b03722f54bc1c72ce0b6d611df765fb05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 09:44:11 compute-0 sudo[77122]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqauofnzjwznupfmybqecaxsdkzglury ; /usr/bin/python3'
Dec 05 09:44:11 compute-0 sudo[77122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:12 compute-0 python3[77125]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:44:12 compute-0 podman[77126]: 2025-12-05 09:44:12.137800399 +0000 UTC m=+0.025871539 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]: [
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:     {
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:         "available": false,
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:         "being_replaced": false,
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:         "ceph_device_lvm": false,
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:         "lsm_data": {},
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:         "lvs": [],
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:         "path": "/dev/sr0",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:         "rejected_reasons": [
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "Has a FileSystem",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "Insufficient space (<5GB)"
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:         ],
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:         "sys_api": {
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "actuators": null,
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "device_nodes": [
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:                 "sr0"
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             ],
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "devname": "sr0",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "human_readable_size": "482.00 KB",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "id_bus": "ata",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "model": "QEMU DVD-ROM",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "nr_requests": "2",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "parent": "/dev/sr0",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "partitions": {},
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "path": "/dev/sr0",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "removable": "1",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "rev": "2.5+",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "ro": "0",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "rotational": "1",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "sas_address": "",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "sas_device_handle": "",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "scheduler_mode": "mq-deadline",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "sectors": 0,
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "sectorsize": "2048",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "size": 493568.0,
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "support_discard": "2048",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "type": "disk",
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:             "vendor": "QEMU"
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:         }
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]:     }
Dec 05 09:44:12 compute-0 upbeat_kalam[77096]: ]
Dec 05 09:44:12 compute-0 systemd[1]: libpod-7d190033a772d3c68673236edaff9d2b03722f54bc1c72ce0b6d611df765fb05.scope: Deactivated successfully.
Dec 05 09:44:12 compute-0 conmon[77096]: conmon 7d190033a772d3c68673 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d190033a772d3c68673236edaff9d2b03722f54bc1c72ce0b6d611df765fb05.scope/container/memory.events
Dec 05 09:44:13 compute-0 podman[77126]: 2025-12-05 09:44:13.061337913 +0000 UTC m=+0.949409043 container create dfc0f16aa8cd41dd86e43444d883251752ef56122b2fd67fa953073c431cce3f (image=quay.io/ceph/ceph:v19, name=loving_archimedes, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:13 compute-0 ceph-mon[74418]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:13 compute-0 ceph-mon[74418]: Added label _admin to host compute-0
Dec 05 09:44:13 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2067817219' entity='client.admin' 
Dec 05 09:44:13 compute-0 podman[77079]: 2025-12-05 09:44:13.209703366 +0000 UTC m=+1.417775201 container died 7d190033a772d3c68673236edaff9d2b03722f54bc1c72ce0b6d611df765fb05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 09:44:13 compute-0 systemd[1]: Started libpod-conmon-dfc0f16aa8cd41dd86e43444d883251752ef56122b2fd67fa953073c431cce3f.scope.
Dec 05 09:44:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b058e81e0bcb58c76ecc830be468278d04e3f316ba110bea7b9c3bc1c7593ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b058e81e0bcb58c76ecc830be468278d04e3f316ba110bea7b9c3bc1c7593ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd9c29c2fe49e95d62ee5fca7b66459a673b8788b078f0de38e397f61d2053fe-merged.mount: Deactivated successfully.
Dec 05 09:44:13 compute-0 podman[77079]: 2025-12-05 09:44:13.511054539 +0000 UTC m=+1.719126404 container remove 7d190033a772d3c68673236edaff9d2b03722f54bc1c72ce0b6d611df765fb05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Dec 05 09:44:13 compute-0 systemd[1]: libpod-conmon-7d190033a772d3c68673236edaff9d2b03722f54bc1c72ce0b6d611df765fb05.scope: Deactivated successfully.
Dec 05 09:44:13 compute-0 sudo[76966]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:44:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:44:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:44:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:13 compute-0 podman[77126]: 2025-12-05 09:44:13.617136915 +0000 UTC m=+1.505208055 container init dfc0f16aa8cd41dd86e43444d883251752ef56122b2fd67fa953073c431cce3f (image=quay.io/ceph/ceph:v19, name=loving_archimedes, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:44:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:44:13 compute-0 podman[77126]: 2025-12-05 09:44:13.627133339 +0000 UTC m=+1.515204459 container start dfc0f16aa8cd41dd86e43444d883251752ef56122b2fd67fa953073c431cce3f (image=quay.io/ceph/ceph:v19, name=loving_archimedes, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 09:44:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 05 09:44:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 09:44:13 compute-0 podman[77126]: 2025-12-05 09:44:13.632489166 +0000 UTC m=+1.520560336 container attach dfc0f16aa8cd41dd86e43444d883251752ef56122b2fd67fa953073c431cce3f (image=quay.io/ceph/ceph:v19, name=loving_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 09:44:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:44:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:44:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:44:13 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:44:13 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:44:13 compute-0 sudo[78097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:44:13 compute-0 sudo[78097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:13 compute-0 sudo[78097]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:13 compute-0 sudo[78122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph
Dec 05 09:44:13 compute-0 sudo[78122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:13 compute-0 sudo[78122]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:13 compute-0 sudo[78166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:44:13 compute-0 sudo[78166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:13 compute-0 sudo[78166]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:13 compute-0 sudo[78191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:44:13 compute-0 sudo[78191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:13 compute-0 sudo[78191]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:13 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:44:13 compute-0 sudo[78216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:44:13 compute-0 sudo[78216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:13 compute-0 sudo[78216]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec 05 09:44:14 compute-0 sudo[78265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:44:14 compute-0 sudo[78265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78265]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 sudo[78290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:44:14 compute-0 sudo[78290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78290]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3175854948' entity='client.admin' 
Dec 05 09:44:14 compute-0 systemd[1]: libpod-dfc0f16aa8cd41dd86e43444d883251752ef56122b2fd67fa953073c431cce3f.scope: Deactivated successfully.
Dec 05 09:44:14 compute-0 podman[77126]: 2025-12-05 09:44:14.141890396 +0000 UTC m=+2.029961526 container died dfc0f16aa8cd41dd86e43444d883251752ef56122b2fd67fa953073c431cce3f (image=quay.io/ceph/ceph:v19, name=loving_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 09:44:14 compute-0 sudo[78316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:44:14 compute-0 sudo[78316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78316]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:44:14 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:44:14 compute-0 sudo[78352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:44:14 compute-0 sudo[78352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b058e81e0bcb58c76ecc830be468278d04e3f316ba110bea7b9c3bc1c7593ef-merged.mount: Deactivated successfully.
Dec 05 09:44:14 compute-0 sudo[78352]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 podman[77126]: 2025-12-05 09:44:14.248096685 +0000 UTC m=+2.136167805 container remove dfc0f16aa8cd41dd86e43444d883251752ef56122b2fd67fa953073c431cce3f (image=quay.io/ceph/ceph:v19, name=loving_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 09:44:14 compute-0 systemd[1]: libpod-conmon-dfc0f16aa8cd41dd86e43444d883251752ef56122b2fd67fa953073c431cce3f.scope: Deactivated successfully.
Dec 05 09:44:14 compute-0 sudo[77122]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 sudo[78380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:44:14 compute-0 sudo[78380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78380]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 sudo[78405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:44:14 compute-0 sudo[78405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78405]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 sudo[78430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:44:14 compute-0 sudo[78430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78430]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 sudo[78455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:44:14 compute-0 sudo[78455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78455]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 sudo[78503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:44:14 compute-0 sudo[78503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78503]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 sudo[78528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:44:14 compute-0 sudo[78528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78528]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 sudo[78555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:44:14 compute-0 sudo[78555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78555]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:44:14 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:44:14 compute-0 sudo[78607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:44:14 compute-0 sudo[78607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78607]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 sudo[78656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph
Dec 05 09:44:14 compute-0 sudo[78656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78656]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 sudo[78703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:44:14 compute-0 sudo[78703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78703]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 sudo[78728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:44:14 compute-0 sudo[78728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78728]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:14 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:14 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:14 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:14 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 09:44:14 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:14 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:44:14 compute-0 ceph-mon[74418]: Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:44:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3175854948' entity='client.admin' 
Dec 05 09:44:14 compute-0 ceph-mon[74418]: Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:44:14 compute-0 sudo[78753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:44:14 compute-0 sudo[78753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:14 compute-0 sudo[78753]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 sudo[78818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:44:15 compute-0 sudo[78818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 sudo[78818]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 sudo[78857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:44:15 compute-0 sudo[78857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 sudo[78857]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 sudo[78946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppabzrjbuasyjgczaonaqbwiygzzinyn ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927854.6757174-37121-154203528307006/async_wrapper.py j452730286647 30 /home/zuul/.ansible/tmp/ansible-tmp-1764927854.6757174-37121-154203528307006/AnsiballZ_command.py _'
Dec 05 09:44:15 compute-0 sudo[78946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:15 compute-0 sudo[78905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 05 09:44:15 compute-0 sudo[78905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 sudo[78905]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:44:15 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:44:15 compute-0 sudo[78951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:44:15 compute-0 sudo[78951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 sudo[78951]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 ansible-async_wrapper.py[78948]: Invoked with j452730286647 30 /home/zuul/.ansible/tmp/ansible-tmp-1764927854.6757174-37121-154203528307006/AnsiballZ_command.py _
Dec 05 09:44:15 compute-0 ansible-async_wrapper.py[79001]: Starting module and watcher
Dec 05 09:44:15 compute-0 ansible-async_wrapper.py[79001]: Start watching 79002 (30)
Dec 05 09:44:15 compute-0 ansible-async_wrapper.py[79002]: Start module (79002)
Dec 05 09:44:15 compute-0 ansible-async_wrapper.py[78948]: Return async_wrapper task started.
Dec 05 09:44:15 compute-0 sudo[78976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:44:15 compute-0 sudo[78976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 sudo[78976]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 sudo[78946]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 sudo[79006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:44:15 compute-0 sudo[79006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 sudo[79006]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 sudo[79031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:44:15 compute-0 sudo[79031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 sudo[79031]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 python3[79004]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:44:15 compute-0 sudo[79056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:44:15 compute-0 sudo[79056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 sudo[79056]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 podman[79079]: 2025-12-05 09:44:15.588362073 +0000 UTC m=+0.043751410 container create 136d0adbcb40d89b7e0247ea1c14c498ac1f8c8816f27d895c43ec5413127164 (image=quay.io/ceph/ceph:v19, name=condescending_khorana, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Dec 05 09:44:15 compute-0 systemd[1]: Started libpod-conmon-136d0adbcb40d89b7e0247ea1c14c498ac1f8c8816f27d895c43ec5413127164.scope.
Dec 05 09:44:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:15 compute-0 sudo[79117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:44:15 compute-0 sudo[79117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f48a19e4c4e36b3d866845225823bce5e30b0c94e77169167eb2685f718eb90b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f48a19e4c4e36b3d866845225823bce5e30b0c94e77169167eb2685f718eb90b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:15 compute-0 sudo[79117]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 podman[79079]: 2025-12-05 09:44:15.57075469 +0000 UTC m=+0.026144047 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:15 compute-0 podman[79079]: 2025-12-05 09:44:15.672537358 +0000 UTC m=+0.127926715 container init 136d0adbcb40d89b7e0247ea1c14c498ac1f8c8816f27d895c43ec5413127164 (image=quay.io/ceph/ceph:v19, name=condescending_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 05 09:44:15 compute-0 podman[79079]: 2025-12-05 09:44:15.678189803 +0000 UTC m=+0.133579140 container start 136d0adbcb40d89b7e0247ea1c14c498ac1f8c8816f27d895c43ec5413127164 (image=quay.io/ceph/ceph:v19, name=condescending_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 05 09:44:15 compute-0 podman[79079]: 2025-12-05 09:44:15.680891587 +0000 UTC m=+0.136280944 container attach 136d0adbcb40d89b7e0247ea1c14c498ac1f8c8816f27d895c43ec5413127164 (image=quay.io/ceph/ceph:v19, name=condescending_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:44:15 compute-0 sudo[79147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:44:15 compute-0 sudo[79147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 sudo[79147]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 sudo[79173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:44:15 compute-0 sudo[79173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 sudo[79173]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:44:15 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:44:15 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:44:15 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:15 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 5b4d36ac-3ab8-44a5-aec4-146dcb3c81ab (Updating crash deployment (+1 -> 1))
Dec 05 09:44:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:44:15 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 05 09:44:15 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 05 09:44:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:44:15 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:15 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 05 09:44:15 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 05 09:44:15 compute-0 sudo[79217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:15 compute-0 sudo[79217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 sudo[79217]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 compute-0 ceph-mgr[74711]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 05 09:44:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:15 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 05 09:44:15 compute-0 sudo[79242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:44:15 compute-0 sudo[79242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:15 compute-0 ceph-mon[74418]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:44:15 compute-0 ceph-mon[74418]: Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:44:15 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:15 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:15 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:15 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 05 09:44:15 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 05 09:44:15 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:15 compute-0 ceph-mon[74418]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 05 09:44:16 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:44:16 compute-0 condescending_khorana[79142]: 
Dec 05 09:44:16 compute-0 condescending_khorana[79142]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 05 09:44:16 compute-0 systemd[1]: libpod-136d0adbcb40d89b7e0247ea1c14c498ac1f8c8816f27d895c43ec5413127164.scope: Deactivated successfully.
Dec 05 09:44:16 compute-0 podman[79079]: 2025-12-05 09:44:16.108601981 +0000 UTC m=+0.563991418 container died 136d0adbcb40d89b7e0247ea1c14c498ac1f8c8816f27d895c43ec5413127164 (image=quay.io/ceph/ceph:v19, name=condescending_khorana, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f48a19e4c4e36b3d866845225823bce5e30b0c94e77169167eb2685f718eb90b-merged.mount: Deactivated successfully.
Dec 05 09:44:16 compute-0 podman[79079]: 2025-12-05 09:44:16.175567315 +0000 UTC m=+0.630956662 container remove 136d0adbcb40d89b7e0247ea1c14c498ac1f8c8816f27d895c43ec5413127164 (image=quay.io/ceph/ceph:v19, name=condescending_khorana, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 05 09:44:16 compute-0 systemd[1]: libpod-conmon-136d0adbcb40d89b7e0247ea1c14c498ac1f8c8816f27d895c43ec5413127164.scope: Deactivated successfully.
Dec 05 09:44:16 compute-0 ansible-async_wrapper.py[79002]: Module complete (79002)
Dec 05 09:44:16 compute-0 podman[79318]: 2025-12-05 09:44:16.309131463 +0000 UTC m=+0.038457654 container create 127a9d5cc088e693cee45ffb8f79a86bb66b92fd77c5519162c5309e5e526e27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_liskov, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 09:44:16 compute-0 systemd[1]: Started libpod-conmon-127a9d5cc088e693cee45ffb8f79a86bb66b92fd77c5519162c5309e5e526e27.scope.
Dec 05 09:44:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:16 compute-0 podman[79318]: 2025-12-05 09:44:16.3754608 +0000 UTC m=+0.104787021 container init 127a9d5cc088e693cee45ffb8f79a86bb66b92fd77c5519162c5309e5e526e27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:44:16 compute-0 podman[79318]: 2025-12-05 09:44:16.380795626 +0000 UTC m=+0.110121817 container start 127a9d5cc088e693cee45ffb8f79a86bb66b92fd77c5519162c5309e5e526e27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:44:16 compute-0 vigilant_liskov[79334]: 167 167
Dec 05 09:44:16 compute-0 podman[79318]: 2025-12-05 09:44:16.384101527 +0000 UTC m=+0.113427738 container attach 127a9d5cc088e693cee45ffb8f79a86bb66b92fd77c5519162c5309e5e526e27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_liskov, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:44:16 compute-0 systemd[1]: libpod-127a9d5cc088e693cee45ffb8f79a86bb66b92fd77c5519162c5309e5e526e27.scope: Deactivated successfully.
Dec 05 09:44:16 compute-0 podman[79318]: 2025-12-05 09:44:16.384796635 +0000 UTC m=+0.114122826 container died 127a9d5cc088e693cee45ffb8f79a86bb66b92fd77c5519162c5309e5e526e27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:44:16 compute-0 podman[79318]: 2025-12-05 09:44:16.291814579 +0000 UTC m=+0.021140810 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1997c434c6e71a0fe9876443041749ce48cd35e672da9af8fe463c2966b1aa3-merged.mount: Deactivated successfully.
Dec 05 09:44:16 compute-0 podman[79318]: 2025-12-05 09:44:16.414557691 +0000 UTC m=+0.143883882 container remove 127a9d5cc088e693cee45ffb8f79a86bb66b92fd77c5519162c5309e5e526e27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_liskov, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:16 compute-0 systemd[1]: libpod-conmon-127a9d5cc088e693cee45ffb8f79a86bb66b92fd77c5519162c5309e5e526e27.scope: Deactivated successfully.
Dec 05 09:44:16 compute-0 systemd[1]: Reloading.
Dec 05 09:44:16 compute-0 systemd-rc-local-generator[79399]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:44:16 compute-0 systemd-sysv-generator[79402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:44:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:44:16 compute-0 sudo[79433]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dctajlwozgihsxjounrponkwiokgjttm ; /usr/bin/python3'
Dec 05 09:44:16 compute-0 sudo[79433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:16 compute-0 systemd[1]: Reloading.
Dec 05 09:44:16 compute-0 systemd-rc-local-generator[79464]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:44:16 compute-0 systemd-sysv-generator[79468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:44:16 compute-0 python3[79439]: ansible-ansible.legacy.async_status Invoked with jid=j452730286647.78948 mode=status _async_dir=/root/.ansible_async
Dec 05 09:44:16 compute-0 sudo[79433]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:16 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:44:16 compute-0 sudo[79523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxxtkafltopckjghpafqthvuxqpgtagm ; /usr/bin/python3'
Dec 05 09:44:16 compute-0 sudo[79523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:16 compute-0 ceph-mon[74418]: Deploying daemon crash.compute-0 on compute-0
Dec 05 09:44:16 compute-0 ceph-mon[74418]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:16 compute-0 ceph-mon[74418]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:44:17 compute-0 python3[79528]: ansible-ansible.legacy.async_status Invoked with jid=j452730286647.78948 mode=cleanup _async_dir=/root/.ansible_async
Dec 05 09:44:17 compute-0 sudo[79523]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:17 compute-0 podman[79571]: 2025-12-05 09:44:17.14914869 +0000 UTC m=+0.042994580 container create b271b4e2be816eabed12a51d98e32d8bc74e281d49fb230057b52ce92b774e02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:44:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed25e188d5ea38a3c49799d4dcceb8ff9afdbd09a8862bad02783a76bdcba3cb/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed25e188d5ea38a3c49799d4dcceb8ff9afdbd09a8862bad02783a76bdcba3cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed25e188d5ea38a3c49799d4dcceb8ff9afdbd09a8862bad02783a76bdcba3cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed25e188d5ea38a3c49799d4dcceb8ff9afdbd09a8862bad02783a76bdcba3cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:17 compute-0 podman[79571]: 2025-12-05 09:44:17.206194832 +0000 UTC m=+0.100040742 container init b271b4e2be816eabed12a51d98e32d8bc74e281d49fb230057b52ce92b774e02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 05 09:44:17 compute-0 podman[79571]: 2025-12-05 09:44:17.210811498 +0000 UTC m=+0.104657388 container start b271b4e2be816eabed12a51d98e32d8bc74e281d49fb230057b52ce92b774e02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 05 09:44:17 compute-0 bash[79571]: b271b4e2be816eabed12a51d98e32d8bc74e281d49fb230057b52ce92b774e02
Dec 05 09:44:17 compute-0 podman[79571]: 2025-12-05 09:44:17.131091934 +0000 UTC m=+0.024937854 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:44:17 compute-0 systemd[1]: Started Ceph crash.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:44:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 05 09:44:17 compute-0 sudo[79242]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:44:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:44:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 05 09:44:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:17 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 5b4d36ac-3ab8-44a5-aec4-146dcb3c81ab (Updating crash deployment (+1 -> 1))
Dec 05 09:44:17 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 5b4d36ac-3ab8-44a5-aec4-146dcb3c81ab (Updating crash deployment (+1 -> 1)) in 1 seconds
Dec 05 09:44:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 05 09:44:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 05 09:44:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 05 09:44:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: 2025-12-05T09:44:17.359+0000 7f111b233640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 05 09:44:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: 2025-12-05T09:44:17.359+0000 7f111b233640 -1 AuthRegistry(0x7f11140698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 05 09:44:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: 2025-12-05T09:44:17.360+0000 7f111b233640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 05 09:44:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: 2025-12-05T09:44:17.360+0000 7f111b233640 -1 AuthRegistry(0x7f111b231ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 05 09:44:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: 2025-12-05T09:44:17.361+0000 7f1118fa8640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 05 09:44:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: 2025-12-05T09:44:17.361+0000 7f111b233640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 05 09:44:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 05 09:44:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 05 09:44:17 compute-0 sudo[79593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:44:17 compute-0 sudo[79593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:17 compute-0 sudo[79593]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:17 compute-0 sudo[79628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:17 compute-0 sudo[79628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:17 compute-0 sudo[79628]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:17 compute-0 sudo[79675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmbwictmmclqfadvupsnckcffqlqqsuj ; /usr/bin/python3'
Dec 05 09:44:17 compute-0 sudo[79675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:17 compute-0 sudo[79678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 09:44:17 compute-0 sudo[79678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:17 compute-0 python3[79681]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 09:44:17 compute-0 sudo[79675]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:18 compute-0 sudo[79800]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmbwivikwvajhzqyvqxivkncrldgtnxx ; /usr/bin/python3'
Dec 05 09:44:18 compute-0 sudo[79800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:18 compute-0 podman[79803]: 2025-12-05 09:44:18.26718991 +0000 UTC m=+0.057657740 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 python3[79802]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:44:18 compute-0 podman[79823]: 2025-12-05 09:44:18.351081028 +0000 UTC m=+0.039238596 container create 5d600014918cbe357eecd32e7337d8321cd9323f4a893f49b7e4601c7a95f0a6 (image=quay.io/ceph/ceph:v19, name=agitated_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:18 compute-0 systemd[1]: Started libpod-conmon-5d600014918cbe357eecd32e7337d8321cd9323f4a893f49b7e4601c7a95f0a6.scope.
Dec 05 09:44:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b1bcb18927df060fadd757be6cbba435ebfe338220358fc676e556e8a60c6e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b1bcb18927df060fadd757be6cbba435ebfe338220358fc676e556e8a60c6e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b1bcb18927df060fadd757be6cbba435ebfe338220358fc676e556e8a60c6e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:18 compute-0 podman[79823]: 2025-12-05 09:44:18.333959359 +0000 UTC m=+0.022116917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:18 compute-0 podman[79823]: 2025-12-05 09:44:18.434592805 +0000 UTC m=+0.122750373 container init 5d600014918cbe357eecd32e7337d8321cd9323f4a893f49b7e4601c7a95f0a6 (image=quay.io/ceph/ceph:v19, name=agitated_johnson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 09:44:18 compute-0 podman[79823]: 2025-12-05 09:44:18.440444556 +0000 UTC m=+0.128602104 container start 5d600014918cbe357eecd32e7337d8321cd9323f4a893f49b7e4601c7a95f0a6 (image=quay.io/ceph/ceph:v19, name=agitated_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:44:18 compute-0 podman[79823]: 2025-12-05 09:44:18.444014523 +0000 UTC m=+0.132172091 container attach 5d600014918cbe357eecd32e7337d8321cd9323f4a893f49b7e4601c7a95f0a6 (image=quay.io/ceph/ceph:v19, name=agitated_johnson, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec 05 09:44:18 compute-0 podman[79840]: 2025-12-05 09:44:18.457380549 +0000 UTC m=+0.054784192 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:44:18 compute-0 podman[79803]: 2025-12-05 09:44:18.462191611 +0000 UTC m=+0.252659441 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 09:44:18 compute-0 sudo[79678]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:44:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 sudo[79908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:44:18 compute-0 sudo[79908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:18 compute-0 sudo[79908]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:18 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 05 09:44:18 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 05 09:44:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 09:44:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 09:44:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:44:18 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:18 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 05 09:44:18 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 05 09:44:18 compute-0 sudo[79933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:18 compute-0 sudo[79933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:18 compute-0 sudo[79933]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:18 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:44:18 compute-0 agitated_johnson[79838]: 
Dec 05 09:44:18 compute-0 agitated_johnson[79838]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 05 09:44:18 compute-0 systemd[1]: libpod-5d600014918cbe357eecd32e7337d8321cd9323f4a893f49b7e4601c7a95f0a6.scope: Deactivated successfully.
Dec 05 09:44:18 compute-0 podman[79823]: 2025-12-05 09:44:18.870117873 +0000 UTC m=+0.558275431 container died 5d600014918cbe357eecd32e7337d8321cd9323f4a893f49b7e4601c7a95f0a6 (image=quay.io/ceph/ceph:v19, name=agitated_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 05 09:44:18 compute-0 sudo[79958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:44:18 compute-0 sudo[79958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-74b1bcb18927df060fadd757be6cbba435ebfe338220358fc676e556e8a60c6e-merged.mount: Deactivated successfully.
Dec 05 09:44:18 compute-0 podman[79823]: 2025-12-05 09:44:18.917986624 +0000 UTC m=+0.606144182 container remove 5d600014918cbe357eecd32e7337d8321cd9323f4a893f49b7e4601c7a95f0a6 (image=quay.io/ceph/ceph:v19, name=agitated_johnson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 09:44:18 compute-0 systemd[1]: libpod-conmon-5d600014918cbe357eecd32e7337d8321cd9323f4a893f49b7e4601c7a95f0a6.scope: Deactivated successfully.
Dec 05 09:44:18 compute-0 sudo[79800]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:19 compute-0 podman[80015]: 2025-12-05 09:44:19.181367467 +0000 UTC m=+0.043870361 container create 39db1333a46bd6cb8f94567c6c955ccdada888eb3beef4ee5dac6888e11ad5d6 (image=quay.io/ceph/ceph:v19, name=fervent_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:44:19 compute-0 systemd[1]: Started libpod-conmon-39db1333a46bd6cb8f94567c6c955ccdada888eb3beef4ee5dac6888e11ad5d6.scope.
Dec 05 09:44:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:19 compute-0 sudo[80057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enuxltoypxnyuapkjslpdnapnytajoje ; /usr/bin/python3'
Dec 05 09:44:19 compute-0 podman[80015]: 2025-12-05 09:44:19.257494742 +0000 UTC m=+0.119997666 container init 39db1333a46bd6cb8f94567c6c955ccdada888eb3beef4ee5dac6888e11ad5d6 (image=quay.io/ceph/ceph:v19, name=fervent_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 09:44:19 compute-0 podman[80015]: 2025-12-05 09:44:19.164440324 +0000 UTC m=+0.026943238 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:19 compute-0 sudo[80057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:19 compute-0 podman[80015]: 2025-12-05 09:44:19.264351381 +0000 UTC m=+0.126854265 container start 39db1333a46bd6cb8f94567c6c955ccdada888eb3beef4ee5dac6888e11ad5d6 (image=quay.io/ceph/ceph:v19, name=fervent_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:19 compute-0 fervent_kilby[80035]: 167 167
Dec 05 09:44:19 compute-0 systemd[1]: libpod-39db1333a46bd6cb8f94567c6c955ccdada888eb3beef4ee5dac6888e11ad5d6.scope: Deactivated successfully.
Dec 05 09:44:19 compute-0 python3[80060]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:44:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:19 compute-0 podman[80015]: 2025-12-05 09:44:19.95550088 +0000 UTC m=+0.818003794 container attach 39db1333a46bd6cb8f94567c6c955ccdada888eb3beef4ee5dac6888e11ad5d6 (image=quay.io/ceph/ceph:v19, name=fervent_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 09:44:19 compute-0 podman[80015]: 2025-12-05 09:44:19.955902381 +0000 UTC m=+0.818405285 container died 39db1333a46bd6cb8f94567c6c955ccdada888eb3beef4ee5dac6888e11ad5d6 (image=quay.io/ceph/ceph:v19, name=fervent_kilby, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 09:44:19 compute-0 ceph-mon[74418]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:19 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:19 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:19 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:19 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:44:19 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:19 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:19 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:19 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:19 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:19 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 09:44:19 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 09:44:19 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce377b0f3ae8351c36bc2ac3a6e468dc9e84a4d5084a9f6b6ccb6186488c9005-merged.mount: Deactivated successfully.
Dec 05 09:44:19 compute-0 podman[80015]: 2025-12-05 09:44:19.996670827 +0000 UTC m=+0.859173731 container remove 39db1333a46bd6cb8f94567c6c955ccdada888eb3beef4ee5dac6888e11ad5d6 (image=quay.io/ceph/ceph:v19, name=fervent_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 09:44:20 compute-0 sudo[79958]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:44:20 compute-0 systemd[1]: libpod-conmon-39db1333a46bd6cb8f94567c6c955ccdada888eb3beef4ee5dac6888e11ad5d6.scope: Deactivated successfully.
Dec 05 09:44:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:44:20 compute-0 podman[80073]: 2025-12-05 09:44:20.146293096 +0000 UTC m=+0.741004377 container create b3d48bf8c0a49e7f5321bd7fd27ad0a43244932c616d78c9766f325f0116df3c (image=quay.io/ceph/ceph:v19, name=unruffled_carver, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 09:44:20 compute-0 podman[80073]: 2025-12-05 09:44:20.060378322 +0000 UTC m=+0.655089623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:20 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.hvnxai (unknown last config time)...
Dec 05 09:44:20 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.hvnxai (unknown last config time)...
Dec 05 09:44:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.hvnxai", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:44:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hvnxai", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 05 09:44:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:44:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 09:44:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:44:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:20 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.hvnxai on compute-0
Dec 05 09:44:20 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.hvnxai on compute-0
Dec 05 09:44:20 compute-0 systemd[1]: Started libpod-conmon-b3d48bf8c0a49e7f5321bd7fd27ad0a43244932c616d78c9766f325f0116df3c.scope.
Dec 05 09:44:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa6a0fc30c42e12e4b27b4cc433e72260be15ae03ce4856c14a33cd39ddc976/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa6a0fc30c42e12e4b27b4cc433e72260be15ae03ce4856c14a33cd39ddc976/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa6a0fc30c42e12e4b27b4cc433e72260be15ae03ce4856c14a33cd39ddc976/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:20 compute-0 sudo[80090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:20 compute-0 sudo[80090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:20 compute-0 sudo[80090]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:20 compute-0 podman[80073]: 2025-12-05 09:44:20.277438868 +0000 UTC m=+0.872150159 container init b3d48bf8c0a49e7f5321bd7fd27ad0a43244932c616d78c9766f325f0116df3c (image=quay.io/ceph/ceph:v19, name=unruffled_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 09:44:20 compute-0 podman[80073]: 2025-12-05 09:44:20.282857295 +0000 UTC m=+0.877568586 container start b3d48bf8c0a49e7f5321bd7fd27ad0a43244932c616d78c9766f325f0116df3c (image=quay.io/ceph/ceph:v19, name=unruffled_carver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:44:20 compute-0 sudo[80118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:44:20 compute-0 podman[80073]: 2025-12-05 09:44:20.287797511 +0000 UTC m=+0.882508832 container attach b3d48bf8c0a49e7f5321bd7fd27ad0a43244932c616d78c9766f325f0116df3c (image=quay.io/ceph/ceph:v19, name=unruffled_carver, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 09:44:20 compute-0 sudo[80118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:20 compute-0 ansible-async_wrapper.py[79001]: Done in kid B.
Dec 05 09:44:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec 05 09:44:20 compute-0 podman[80180]: 2025-12-05 09:44:20.555610966 +0000 UTC m=+0.018694984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:20 compute-0 podman[80180]: 2025-12-05 09:44:20.737853957 +0000 UTC m=+0.200937995 container create ca8420de6f5c74b231a19976a21d4d3ba5b213ce4978d5b6f04423eba110f7ef (image=quay.io/ceph/ceph:v19, name=youthful_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:44:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3709406825' entity='client.admin' 
Dec 05 09:44:20 compute-0 podman[80073]: 2025-12-05 09:44:20.768948709 +0000 UTC m=+1.363660000 container died b3d48bf8c0a49e7f5321bd7fd27ad0a43244932c616d78c9766f325f0116df3c (image=quay.io/ceph/ceph:v19, name=unruffled_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:44:20 compute-0 systemd[1]: libpod-b3d48bf8c0a49e7f5321bd7fd27ad0a43244932c616d78c9766f325f0116df3c.scope: Deactivated successfully.
Dec 05 09:44:20 compute-0 systemd[1]: Started libpod-conmon-ca8420de6f5c74b231a19976a21d4d3ba5b213ce4978d5b6f04423eba110f7ef.scope.
Dec 05 09:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-efa6a0fc30c42e12e4b27b4cc433e72260be15ae03ce4856c14a33cd39ddc976-merged.mount: Deactivated successfully.
Dec 05 09:44:20 compute-0 podman[80073]: 2025-12-05 09:44:20.855145819 +0000 UTC m=+1.449857090 container remove b3d48bf8c0a49e7f5321bd7fd27ad0a43244932c616d78c9766f325f0116df3c (image=quay.io/ceph/ceph:v19, name=unruffled_carver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:44:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:20 compute-0 systemd[1]: libpod-conmon-b3d48bf8c0a49e7f5321bd7fd27ad0a43244932c616d78c9766f325f0116df3c.scope: Deactivated successfully.
Dec 05 09:44:20 compute-0 sudo[80057]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:20 compute-0 podman[80180]: 2025-12-05 09:44:20.882557491 +0000 UTC m=+0.345641519 container init ca8420de6f5c74b231a19976a21d4d3ba5b213ce4978d5b6f04423eba110f7ef (image=quay.io/ceph/ceph:v19, name=youthful_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 09:44:20 compute-0 podman[80180]: 2025-12-05 09:44:20.88838908 +0000 UTC m=+0.351473078 container start ca8420de6f5c74b231a19976a21d4d3ba5b213ce4978d5b6f04423eba110f7ef (image=quay.io/ceph/ceph:v19, name=youthful_perlman, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:44:20 compute-0 podman[80180]: 2025-12-05 09:44:20.892149323 +0000 UTC m=+0.355233321 container attach ca8420de6f5c74b231a19976a21d4d3ba5b213ce4978d5b6f04423eba110f7ef (image=quay.io/ceph/ceph:v19, name=youthful_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:44:20 compute-0 youthful_perlman[80213]: 167 167
Dec 05 09:44:20 compute-0 systemd[1]: libpod-ca8420de6f5c74b231a19976a21d4d3ba5b213ce4978d5b6f04423eba110f7ef.scope: Deactivated successfully.
Dec 05 09:44:20 compute-0 podman[80218]: 2025-12-05 09:44:20.931878671 +0000 UTC m=+0.023481784 container died ca8420de6f5c74b231a19976a21d4d3ba5b213ce4978d5b6f04423eba110f7ef (image=quay.io/ceph/ceph:v19, name=youthful_perlman, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2bc8c32ede4164ee3dea4695448b9d280b79415c6fab93a1818e226772bb1a3-merged.mount: Deactivated successfully.
Dec 05 09:44:20 compute-0 ceph-mon[74418]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 05 09:44:20 compute-0 ceph-mon[74418]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 05 09:44:20 compute-0 ceph-mon[74418]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:44:20 compute-0 ceph-mon[74418]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:20 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:20 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:20 compute-0 ceph-mon[74418]: Reconfiguring mgr.compute-0.hvnxai (unknown last config time)...
Dec 05 09:44:20 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hvnxai", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 05 09:44:20 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 09:44:20 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:20 compute-0 ceph-mon[74418]: Reconfiguring daemon mgr.compute-0.hvnxai on compute-0
Dec 05 09:44:20 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3709406825' entity='client.admin' 
Dec 05 09:44:20 compute-0 podman[80218]: 2025-12-05 09:44:20.972789882 +0000 UTC m=+0.064393015 container remove ca8420de6f5c74b231a19976a21d4d3ba5b213ce4978d5b6f04423eba110f7ef (image=quay.io/ceph/ceph:v19, name=youthful_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:44:20 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 1 completed events
Dec 05 09:44:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:44:20 compute-0 systemd[1]: libpod-conmon-ca8420de6f5c74b231a19976a21d4d3ba5b213ce4978d5b6f04423eba110f7ef.scope: Deactivated successfully.
Dec 05 09:44:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:21 compute-0 sudo[80118]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:44:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:44:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:44:21 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:44:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:44:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:44:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:21 compute-0 sudo[80256]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqdkvbsuadfhzflzbxraedoarhfnbdrx ; /usr/bin/python3'
Dec 05 09:44:21 compute-0 sudo[80256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:21 compute-0 sudo[80257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:44:21 compute-0 sudo[80257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:21 compute-0 sudo[80257]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:21 compute-0 python3[80266]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:44:21 compute-0 podman[80284]: 2025-12-05 09:44:21.331626459 +0000 UTC m=+0.061986338 container create a624dc262bdf6107e4b2668b04afca8edc34ba71702f417a2c3e15674e5d565e (image=quay.io/ceph/ceph:v19, name=lucid_goldwasser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:44:21 compute-0 systemd[1]: Started libpod-conmon-a624dc262bdf6107e4b2668b04afca8edc34ba71702f417a2c3e15674e5d565e.scope.
Dec 05 09:44:21 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af25f7390e758556de39d6ecfca25885216e50047d45ec51579e01130eaa6b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af25f7390e758556de39d6ecfca25885216e50047d45ec51579e01130eaa6b9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af25f7390e758556de39d6ecfca25885216e50047d45ec51579e01130eaa6b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
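The kernel's "supports timestamps until 2038" lines are informational, not failures: the XFS filesystem backing these paths was created without the big-timestamps (bigtime) feature, and the kernel notes the 2038 limit each time podman bind-mounts a path into a container's overlay. Assuming shell access on the host, one way to check the feature flag (hedged; older xfsprogs releases do not print it):

    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'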
Dec 05 09:44:21 compute-0 podman[80284]: 2025-12-05 09:44:21.312082894 +0000 UTC m=+0.042442783 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:21 compute-0 podman[80284]: 2025-12-05 09:44:21.416655098 +0000 UTC m=+0.147015007 container init a624dc262bdf6107e4b2668b04afca8edc34ba71702f417a2c3e15674e5d565e (image=quay.io/ceph/ceph:v19, name=lucid_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 05 09:44:21 compute-0 podman[80284]: 2025-12-05 09:44:21.423537727 +0000 UTC m=+0.153897606 container start a624dc262bdf6107e4b2668b04afca8edc34ba71702f417a2c3e15674e5d565e (image=quay.io/ceph/ceph:v19, name=lucid_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 09:44:21 compute-0 podman[80284]: 2025-12-05 09:44:21.428858722 +0000 UTC m=+0.159218641 container attach a624dc262bdf6107e4b2668b04afca8edc34ba71702f417a2c3e15674e5d565e (image=quay.io/ceph/ceph:v19, name=lucid_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:44:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:44:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec 05 09:44:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4162805436' entity='client.admin' 
Dec 05 09:44:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:44:21 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:44:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:44:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:44:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:21 compute-0 systemd[1]: libpod-a624dc262bdf6107e4b2668b04afca8edc34ba71702f417a2c3e15674e5d565e.scope: Deactivated successfully.
Dec 05 09:44:21 compute-0 podman[80284]: 2025-12-05 09:44:21.790429985 +0000 UTC m=+0.520789854 container died a624dc262bdf6107e4b2668b04afca8edc34ba71702f417a2c3e15674e5d565e (image=quay.io/ceph/ceph:v19, name=lucid_goldwasser, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 09:44:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8af25f7390e758556de39d6ecfca25885216e50047d45ec51579e01130eaa6b9-merged.mount: Deactivated successfully.
Dec 05 09:44:21 compute-0 podman[80284]: 2025-12-05 09:44:21.833145055 +0000 UTC m=+0.563504924 container remove a624dc262bdf6107e4b2668b04afca8edc34ba71702f417a2c3e15674e5d565e (image=quay.io/ceph/ceph:v19, name=lucid_goldwasser, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:21 compute-0 systemd[1]: libpod-conmon-a624dc262bdf6107e4b2668b04afca8edc34ba71702f417a2c3e15674e5d565e.scope: Deactivated successfully.
Dec 05 09:44:21 compute-0 sudo[80325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:44:21 compute-0 sudo[80325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:21 compute-0 sudo[80256]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:21 compute-0 sudo[80325]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:22 compute-0 sudo[80383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyfehardvvzilhjmvdzznzhxumrkamzp ; /usr/bin/python3'
Dec 05 09:44:22 compute-0 sudo[80383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:22 compute-0 python3[80385]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
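This step raises the floor on client features: stamping the OSDMap with `osd set-require-min-compat-client mimic` refuses clients older than Mimic, which in particular guarantees pg-upmap support (upmap needs Luminous-or-newer clients, and the balancer later in this log runs in upmap mode). Inside the same podman wrapper as the earlier task, the effective subcommand is simply:

    ceph osd set-require-min-compat-client mimic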
Dec 05 09:44:22 compute-0 podman[80386]: 2025-12-05 09:44:22.232811661 +0000 UTC m=+0.022504977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:22 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:22 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:22 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:22 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:22 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:44:22 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:22 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4162805436' entity='client.admin' 
Dec 05 09:44:22 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:22 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:44:22 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:22 compute-0 podman[80386]: 2025-12-05 09:44:22.43944931 +0000 UTC m=+0.229142576 container create 0176f64b1d735a1ddfdcfce73ad1f9f4ab3d6d96352a80377b1e736fc8a85e89 (image=quay.io/ceph/ceph:v19, name=tender_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 09:44:22 compute-0 systemd[1]: Started libpod-conmon-0176f64b1d735a1ddfdcfce73ad1f9f4ab3d6d96352a80377b1e736fc8a85e89.scope.
Dec 05 09:44:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd520b25eb917bacd20dad34453d39e028a0650a31842f31fe18773db5e47a4d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd520b25eb917bacd20dad34453d39e028a0650a31842f31fe18773db5e47a4d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd520b25eb917bacd20dad34453d39e028a0650a31842f31fe18773db5e47a4d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:22 compute-0 podman[80386]: 2025-12-05 09:44:22.521353484 +0000 UTC m=+0.311046750 container init 0176f64b1d735a1ddfdcfce73ad1f9f4ab3d6d96352a80377b1e736fc8a85e89 (image=quay.io/ceph/ceph:v19, name=tender_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:44:22 compute-0 podman[80386]: 2025-12-05 09:44:22.527675256 +0000 UTC m=+0.317368522 container start 0176f64b1d735a1ddfdcfce73ad1f9f4ab3d6d96352a80377b1e736fc8a85e89 (image=quay.io/ceph/ceph:v19, name=tender_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 05 09:44:22 compute-0 podman[80386]: 2025-12-05 09:44:22.530497784 +0000 UTC m=+0.320191070 container attach 0176f64b1d735a1ddfdcfce73ad1f9f4ab3d6d96352a80377b1e736fc8a85e89 (image=quay.io/ceph/ceph:v19, name=tender_elbakyan, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:44:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec 05 09:44:22 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/759913400' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 05 09:44:23 compute-0 ceph-mon[74418]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:23 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/759913400' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 05 09:44:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 05 09:44:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 09:44:23 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/759913400' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 05 09:44:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 05 09:44:23 compute-0 tender_elbakyan[80401]: set require_min_compat_client to mimic
Dec 05 09:44:23 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 05 09:44:23 compute-0 systemd[1]: libpod-0176f64b1d735a1ddfdcfce73ad1f9f4ab3d6d96352a80377b1e736fc8a85e89.scope: Deactivated successfully.
Dec 05 09:44:23 compute-0 conmon[80401]: conmon 0176f64b1d735a1ddfdc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0176f64b1d735a1ddfdcfce73ad1f9f4ab3d6d96352a80377b1e736fc8a85e89.scope/container/memory.events
Dec 05 09:44:23 compute-0 podman[80386]: 2025-12-05 09:44:23.830382094 +0000 UTC m=+1.620075400 container died 0176f64b1d735a1ddfdcfce73ad1f9f4ab3d6d96352a80377b1e736fc8a85e89 (image=quay.io/ceph/ceph:v19, name=tender_elbakyan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 09:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd520b25eb917bacd20dad34453d39e028a0650a31842f31fe18773db5e47a4d-merged.mount: Deactivated successfully.
Dec 05 09:44:23 compute-0 podman[80386]: 2025-12-05 09:44:23.881516734 +0000 UTC m=+1.671210030 container remove 0176f64b1d735a1ddfdcfce73ad1f9f4ab3d6d96352a80377b1e736fc8a85e89 (image=quay.io/ceph/ceph:v19, name=tender_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:44:23 compute-0 systemd[1]: libpod-conmon-0176f64b1d735a1ddfdcfce73ad1f9f4ab3d6d96352a80377b1e736fc8a85e89.scope: Deactivated successfully.
Dec 05 09:44:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:23 compute-0 sudo[80383]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:24 compute-0 sudo[80461]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjayjuvznxmnrqxtgrarzjzyeooctszs ; /usr/bin/python3'
Dec 05 09:44:24 compute-0 sudo[80461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:24 compute-0 python3[80463]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
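`orch apply --in-file /home/ceph_spec.yaml` submits the whole service specification to the cephadm mgr module in one shot; the module stores it and converges asynchronously, which is why this container can exit before any daemon exists. The spec file's contents are not logged, but from the services and placements that appear at 09:44:36 it plausibly resembled the following (a hypothetical reconstruction, not the actual file):

    # hypothetical reconstruction of /home/ceph-admin/specs/ceph_spec.yaml
    service_type: mon
    placement:
      hosts: [compute-0, compute-1, compute-2]
    ---
    service_type: mgr
    placement:
      hosts: [compute-0, compute-1, compute-2]
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts: [compute-0, compute-1, compute-2]
    spec:
      data_devices:    # assumed; the actual device selection is not in the log
        all: true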
Dec 05 09:44:24 compute-0 podman[80464]: 2025-12-05 09:44:24.635173136 +0000 UTC m=+0.097793730 container create d024d93e0e95ccc06fbab2dc3aaf7048b156cf3ee32a4a9263837ab4ebdf0588 (image=quay.io/ceph/ceph:v19, name=wonderful_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:44:24 compute-0 podman[80464]: 2025-12-05 09:44:24.616103554 +0000 UTC m=+0.078724168 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:24 compute-0 systemd[1]: Started libpod-conmon-d024d93e0e95ccc06fbab2dc3aaf7048b156cf3ee32a4a9263837ab4ebdf0588.scope.
Dec 05 09:44:24 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba77f359579c9aeb5d174abed9e09d75c9a7637c37bd905f32bff78a7c4f1316/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba77f359579c9aeb5d174abed9e09d75c9a7637c37bd905f32bff78a7c4f1316/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba77f359579c9aeb5d174abed9e09d75c9a7637c37bd905f32bff78a7c4f1316/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:24 compute-0 podman[80464]: 2025-12-05 09:44:24.726914768 +0000 UTC m=+0.189535453 container init d024d93e0e95ccc06fbab2dc3aaf7048b156cf3ee32a4a9263837ab4ebdf0588 (image=quay.io/ceph/ceph:v19, name=wonderful_wescoff, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:44:24 compute-0 podman[80464]: 2025-12-05 09:44:24.733442087 +0000 UTC m=+0.196062681 container start d024d93e0e95ccc06fbab2dc3aaf7048b156cf3ee32a4a9263837ab4ebdf0588 (image=quay.io/ceph/ceph:v19, name=wonderful_wescoff, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 09:44:24 compute-0 podman[80464]: 2025-12-05 09:44:24.740127161 +0000 UTC m=+0.202747795 container attach d024d93e0e95ccc06fbab2dc3aaf7048b156cf3ee32a4a9263837ab4ebdf0588 (image=quay.io/ceph/ceph:v19, name=wonderful_wescoff, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 09:44:24 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/759913400' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 05 09:44:24 compute-0 ceph-mon[74418]: osdmap e3: 0 total, 0 up, 0 in
Dec 05 09:44:24 compute-0 ceph-mon[74418]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:25 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14170 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:25 compute-0 sudo[80504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:25 compute-0 sudo[80504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:25 compute-0 sudo[80504]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:25 compute-0 sudo[80529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 05 09:44:25 compute-0 sudo[80529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:25 compute-0 sudo[80529]: pam_unix(sudo:session): session closed for user root
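The check-host probe above is how the cephadm mgr module vets a node before managing it: the orchestrator connects as its admin user (ceph-admin here), stages a checksum-named copy of the cephadm binary under /var/lib/ceph/<fsid>/, and runs it with sudo. Re-wrapped, the logged probe is:

    sudo /bin/python3 \
        /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 \
        --timeout 895 check-host --expect-hostname compute-0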
Dec 05 09:44:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:44:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:44:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:44:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:44:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:25 compute-0 ceph-mgr[74711]: [cephadm INFO root] Added host compute-0
Dec 05 09:44:25 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 05 09:44:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:44:25 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:44:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:44:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:44:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:25 compute-0 sudo[80573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:44:25 compute-0 sudo[80573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:44:25 compute-0 sudo[80573]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:25 compute-0 ceph-mon[74418]: from='client.14170 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:44:25 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:25 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:25 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:25 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:25 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:44:25 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:44:25 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:25 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:44:25 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:44:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:44:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:44:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:44:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:44:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
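The recurring `_set_new_cache_sizes` lines are the monitor's memory autotuner redistributing its budget between the incremental/full osdmap caches and the RocksDB key-value cache; the cache_size of 1020054731 bytes (~0.95 GiB) is derived from the monitor's memory target. Assuming admin access, the target itself could be inspected with (a sketch):

    ceph config get mon mon_memory_target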
Dec 05 09:44:26 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Dec 05 09:44:26 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Dec 05 09:44:26 compute-0 ceph-mon[74418]: Added host compute-0
Dec 05 09:44:26 compute-0 ceph-mon[74418]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:27 compute-0 ceph-mon[74418]: Deploying cephadm binary to compute-1
Dec 05 09:44:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:28 compute-0 ceph-mon[74418]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:44:30 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:30 compute-0 ceph-mgr[74711]: [cephadm INFO root] Added host compute-1
Dec 05 09:44:30 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Added host compute-1
Dec 05 09:44:30 compute-0 ceph-mon[74418]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:30 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:44:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:44:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:44:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:32 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Dec 05 09:44:32 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Dec 05 09:44:32 compute-0 ceph-mon[74418]: Added host compute-1
Dec 05 09:44:32 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:32 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:44:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:33 compute-0 ceph-mon[74418]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:33 compute-0 ceph-mon[74418]: Deploying cephadm binary to compute-2
Dec 05 09:44:33 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:34 compute-0 ceph-mon[74418]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:44:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:36 compute-0 ceph-mgr[74711]: [cephadm INFO root] Added host compute-2
Dec 05 09:44:36 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Added host compute-2
Dec 05 09:44:36 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 05 09:44:36 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 05 09:44:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 05 09:44:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:44:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:36 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 05 09:44:36 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 05 09:44:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 05 09:44:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:36 compute-0 ceph-mgr[74711]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 05 09:44:36 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 05 09:44:36 compute-0 ceph-mgr[74711]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Dec 05 09:44:36 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Dec 05 09:44:36 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 05 09:44:36 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 05 09:44:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec 05 09:44:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:36 compute-0 wonderful_wescoff[80480]: Added host 'compute-0' with addr '192.168.122.100'
Dec 05 09:44:36 compute-0 wonderful_wescoff[80480]: Added host 'compute-1' with addr '192.168.122.101'
Dec 05 09:44:36 compute-0 wonderful_wescoff[80480]: Added host 'compute-2' with addr '192.168.122.102'
Dec 05 09:44:36 compute-0 wonderful_wescoff[80480]: Scheduled mon update...
Dec 05 09:44:36 compute-0 wonderful_wescoff[80480]: Scheduled mgr update...
Dec 05 09:44:36 compute-0 wonderful_wescoff[80480]: Scheduled osd.default_drive_group update...
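The container output above is the synchronous half of `orch apply`: hosts registered, three service updates queued. Actual placement happens in the background serve loop, so at this instant the cluster still has zero OSDs. A natural way to watch convergence from the same admin-container wrapper would be (a sketch, not taken from the log):

    ceph orch ls    # specs with running/expected daemon counts
    ceph orch ps    # per-daemon placement and status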
Dec 05 09:44:36 compute-0 systemd[1]: libpod-d024d93e0e95ccc06fbab2dc3aaf7048b156cf3ee32a4a9263837ab4ebdf0588.scope: Deactivated successfully.
Dec 05 09:44:36 compute-0 podman[80464]: 2025-12-05 09:44:36.883198926 +0000 UTC m=+12.345819520 container died d024d93e0e95ccc06fbab2dc3aaf7048b156cf3ee32a4a9263837ab4ebdf0588 (image=quay.io/ceph/ceph:v19, name=wonderful_wescoff, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 09:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba77f359579c9aeb5d174abed9e09d75c9a7637c37bd905f32bff78a7c4f1316-merged.mount: Deactivated successfully.
Dec 05 09:44:36 compute-0 podman[80464]: 2025-12-05 09:44:36.915979629 +0000 UTC m=+12.378600223 container remove d024d93e0e95ccc06fbab2dc3aaf7048b156cf3ee32a4a9263837ab4ebdf0588 (image=quay.io/ceph/ceph:v19, name=wonderful_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 09:44:36 compute-0 systemd[1]: libpod-conmon-d024d93e0e95ccc06fbab2dc3aaf7048b156cf3ee32a4a9263837ab4ebdf0588.scope: Deactivated successfully.
Dec 05 09:44:36 compute-0 sudo[80461]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:37 compute-0 ceph-mon[74418]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:37 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:37 compute-0 ceph-mon[74418]: Added host compute-2
Dec 05 09:44:37 compute-0 ceph-mon[74418]: Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 05 09:44:37 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:37 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:37 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:44:37 compute-0 sudo[80633]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pamkpzvexxpgcebupjbliojjzyxstghu ; /usr/bin/python3'
Dec 05 09:44:37 compute-0 sudo[80633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:37 compute-0 python3[80635]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
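Unlike the earlier tasks, this one passes _uses_shell=True because the command is a pipeline: ceph's JSON status is fed through jq to extract a single integer. The job is polling for OSDs to come up; only the pipeline is from the log, and a retry loop of the kind such playbooks typically wrap around it might look like (an assumption):

    until [ "$(podman run --rm --net=host --ipc=host \
            --volume /etc/ceph:/etc/ceph:z \
            --entrypoint ceph quay.io/ceph/ceph:v19 \
            --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 \
            -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
            status --format json | jq .osdmap.num_up_osds)" -gt 0 ]; do
        sleep 30    # hypothetical interval; the log shows a retry ~31 s later
    done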
Dec 05 09:44:37 compute-0 podman[80637]: 2025-12-05 09:44:37.51006613 +0000 UTC m=+0.041878070 container create 591cd6bf330554cd11d438489bc3d0e2f3d6d632fd42a7aad3f2c6766574deaa (image=quay.io/ceph/ceph:v19, name=cranky_panini, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:44:37 compute-0 systemd[1]: Started libpod-conmon-591cd6bf330554cd11d438489bc3d0e2f3d6d632fd42a7aad3f2c6766574deaa.scope.
Dec 05 09:44:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e8d59be9e8c2089072b394efeab46405246629d78f87dfa3b25807a19d7579/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e8d59be9e8c2089072b394efeab46405246629d78f87dfa3b25807a19d7579/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e8d59be9e8c2089072b394efeab46405246629d78f87dfa3b25807a19d7579/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:44:37 compute-0 podman[80637]: 2025-12-05 09:44:37.577748214 +0000 UTC m=+0.109560204 container init 591cd6bf330554cd11d438489bc3d0e2f3d6d632fd42a7aad3f2c6766574deaa (image=quay.io/ceph/ceph:v19, name=cranky_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:44:37 compute-0 podman[80637]: 2025-12-05 09:44:37.58760787 +0000 UTC m=+0.119419810 container start 591cd6bf330554cd11d438489bc3d0e2f3d6d632fd42a7aad3f2c6766574deaa (image=quay.io/ceph/ceph:v19, name=cranky_panini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 09:44:37 compute-0 podman[80637]: 2025-12-05 09:44:37.494067058 +0000 UTC m=+0.025879018 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:44:37 compute-0 podman[80637]: 2025-12-05 09:44:37.591487612 +0000 UTC m=+0.123299572 container attach 591cd6bf330554cd11d438489bc3d0e2f3d6d632fd42a7aad3f2c6766574deaa (image=quay.io/ceph/ceph:v19, name=cranky_panini, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:44:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 05 09:44:38 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1748591770' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 09:44:38 compute-0 cranky_panini[80653]: 
Dec 05 09:44:38 compute-0 cranky_panini[80653]: {"fsid":"3c63ce0f-5206-59ae-8381-b67d0b6424b5","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":71,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-05T09:43:21:401410+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-05T09:43:21.404052+0000","services":{}},"progress_events":{}}
Dec 05 09:44:38 compute-0 systemd[1]: libpod-591cd6bf330554cd11d438489bc3d0e2f3d6d632fd42a7aad3f2c6766574deaa.scope: Deactivated successfully.
Dec 05 09:44:38 compute-0 podman[80678]: 2025-12-05 09:44:38.078428158 +0000 UTC m=+0.022585800 container died 591cd6bf330554cd11d438489bc3d0e2f3d6d632fd42a7aad3f2c6766574deaa (image=quay.io/ceph/ceph:v19, name=cranky_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-78e8d59be9e8c2089072b394efeab46405246629d78f87dfa3b25807a19d7579-merged.mount: Deactivated successfully.
Dec 05 09:44:38 compute-0 podman[80678]: 2025-12-05 09:44:38.113000252 +0000 UTC m=+0.057157874 container remove 591cd6bf330554cd11d438489bc3d0e2f3d6d632fd42a7aad3f2c6766574deaa (image=quay.io/ceph/ceph:v19, name=cranky_panini, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 09:44:38 compute-0 systemd[1]: libpod-conmon-591cd6bf330554cd11d438489bc3d0e2f3d6d632fd42a7aad3f2c6766574deaa.scope: Deactivated successfully.
Dec 05 09:44:38 compute-0 sudo[80633]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:38 compute-0 ceph-mon[74418]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 05 09:44:38 compute-0 ceph-mon[74418]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 05 09:44:38 compute-0 ceph-mon[74418]: Marking host: compute-1 for OSDSpec preview refresh.
Dec 05 09:44:38 compute-0 ceph-mon[74418]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 05 09:44:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1748591770' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 09:44:39 compute-0 ceph-mon[74418]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:41 compute-0 ceph-mon[74418]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:44:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:42 compute-0 ceph-mon[74418]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:44 compute-0 ceph-mon[74418]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:44:46 compute-0 ceph-mon[74418]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:48 compute-0 ceph-mon[74418]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:50 compute-0 ceph-mon[74418]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:44:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:52 compute-0 ceph-mon[74418]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:55 compute-0 ceph-mon[74418]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:55 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:44:55
Dec 05 09:44:55 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:44:55 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:44:55 compute-0 ceph-mgr[74711]: [balancer INFO root] No pools available
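This balancer tick is a deliberate no-op: the module wakes up, plans in upmap mode (the mode the earlier min-compat-client bump guarantees clients can follow), and bails out because no pools exist yet. Its state could be confirmed with (a sketch):

    ceph balancer status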
Dec 05 09:44:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:44:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:44:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:44:55 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:44:55 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:44:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:44:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:44:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:44:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:44:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:44:57 compute-0 ceph-mon[74418]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:59 compute-0 ceph-mon[74418]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:44:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:45:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:01 compute-0 ceph-mon[74418]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:03 compute-0 ceph-mon[74418]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:05 compute-0 ceph-mon[74418]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:45:07 compute-0 ceph-mon[74418]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:08 compute-0 sudo[80716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlqlrrtdjhiqggsgbjpnegryieswujld ; /usr/bin/python3'
Dec 05 09:45:08 compute-0 sudo[80716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:08 compute-0 python3[80718]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:45:08 compute-0 podman[80720]: 2025-12-05 09:45:08.536700271 +0000 UTC m=+0.025500650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:45:08 compute-0 podman[80720]: 2025-12-05 09:45:08.639577466 +0000 UTC m=+0.128377835 container create 3e504f9357a32ce0e9bd68d6a6af7420b4c766d827b20d35eec8c6a387a1d53f (image=quay.io/ceph/ceph:v19, name=exciting_saha, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:45:08 compute-0 systemd[1]: Started libpod-conmon-3e504f9357a32ce0e9bd68d6a6af7420b4c766d827b20d35eec8c6a387a1d53f.scope.
Dec 05 09:45:08 compute-0 ceph-mon[74418]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e61a1903f7d124ba913f3466bca694c62973dbee534e056376f54a7dadd31f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e61a1903f7d124ba913f3466bca694c62973dbee534e056376f54a7dadd31f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e61a1903f7d124ba913f3466bca694c62973dbee534e056376f54a7dadd31f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:08 compute-0 podman[80720]: 2025-12-05 09:45:08.709433302 +0000 UTC m=+0.198233671 container init 3e504f9357a32ce0e9bd68d6a6af7420b4c766d827b20d35eec8c6a387a1d53f (image=quay.io/ceph/ceph:v19, name=exciting_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:45:08 compute-0 podman[80720]: 2025-12-05 09:45:08.714539713 +0000 UTC m=+0.203340072 container start 3e504f9357a32ce0e9bd68d6a6af7420b4c766d827b20d35eec8c6a387a1d53f (image=quay.io/ceph/ceph:v19, name=exciting_saha, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:45:08 compute-0 podman[80720]: 2025-12-05 09:45:08.832665661 +0000 UTC m=+0.321466020 container attach 3e504f9357a32ce0e9bd68d6a6af7420b4c766d827b20d35eec8c6a387a1d53f (image=quay.io/ceph/ceph:v19, name=exciting_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 05 09:45:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 05 09:45:09 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330652569' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 09:45:09 compute-0 exciting_saha[80736]: {"fsid":"3c63ce0f-5206-59ae-8381-b67d0b6424b5","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":103,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-05T09:43:21:401410+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-05T09:44:57.902053+0000","services":{}},"progress_events":{}}
Dec 05 09:45:09 compute-0 systemd[1]: libpod-3e504f9357a32ce0e9bd68d6a6af7420b4c766d827b20d35eec8c6a387a1d53f.scope: Deactivated successfully.
Dec 05 09:45:09 compute-0 podman[80720]: 2025-12-05 09:45:09.168167094 +0000 UTC m=+0.656967473 container died 3e504f9357a32ce0e9bd68d6a6af7420b4c766d827b20d35eec8c6a387a1d53f (image=quay.io/ceph/ceph:v19, name=exciting_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:45:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:09 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3330652569' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 09:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-94e61a1903f7d124ba913f3466bca694c62973dbee534e056376f54a7dadd31f-merged.mount: Deactivated successfully.
Dec 05 09:45:10 compute-0 podman[80720]: 2025-12-05 09:45:10.144029932 +0000 UTC m=+1.632830291 container remove 3e504f9357a32ce0e9bd68d6a6af7420b4c766d827b20d35eec8c6a387a1d53f (image=quay.io/ceph/ceph:v19, name=exciting_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 09:45:10 compute-0 sudo[80716]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:10 compute-0 systemd[1]: libpod-conmon-3e504f9357a32ce0e9bd68d6a6af7420b4c766d827b20d35eec8c6a387a1d53f.scope: Deactivated successfully.
Dec 05 09:45:11 compute-0 ceph-mon[74418]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:45:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:45:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:45:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:45:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:45:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 05 09:45:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 09:45:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:45:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:45:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:45:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:45:12 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:45:12 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:45:12 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:45:12 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:45:13 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:45:13 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:45:13 compute-0 ceph-mon[74418]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 09:45:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:45:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:45:13 compute-0 ceph-mon[74418]: Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:45:13 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:45:13 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
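
The four "Updating compute-1:..." lines are cephadm distributing client config: it asks the mon for a minimal ceph.conf (`config generate-minimal-conf`) and the admin keyring (`auth get client.admin`), then writes both to /etc/ceph and to the per-fsid config directory on the target host. A sketch of the two fetches (the mon_command wrapper is illustrative; the destination paths are the ones from the log):

    import subprocess

    def mon_command(*args) -> str:
        # Issue a ceph CLI command and return its stdout as text.
        return subprocess.check_output(["ceph", *args]).decode()

    minimal_conf = mon_command("config", "generate-minimal-conf")
    admin_keyring = mon_command("auth", "get", "client.admin")
    # cephadm then writes these to, per the log:
    #   /etc/ceph/ceph.conf
    #   /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
    #   /etc/ceph/ceph.client.admin.keyring
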
Dec 05 09:45:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:45:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:45:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:45:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:14 compute-0 ceph-mgr[74711]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 05 09:45:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:45:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 05 09:45:14 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 05 09:45:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:14 compute-0 ceph-mgr[74711]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 05 09:45:14 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 05 09:45:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:14 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 38ad91e0-2912-4290-995f-c7ff0bb6a781 (Updating crash deployment (+1 -> 2))
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:45:14.303+0000 7f9e49ef8640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: service_name: mon
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: placement:
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   hosts:
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   - compute-0
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   - compute-1
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   - compute-2
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:45:14.304+0000 7f9e49ef8640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: service_name: mgr
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: placement:
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   hosts:
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   - compute-0
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   - compute-1
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   - compute-2
Dec 05 09:45:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 05 09:45:14 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec 05 09:45:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 05 09:45:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:45:14 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:45:14 compute-0 ceph-mon[74418]: Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:45:14 compute-0 ceph-mon[74418]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:45:14 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:14 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:14 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:14 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 05 09:45:14 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Dec 05 09:45:14 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
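
Deploying crash.compute-1 is preceded by the `auth get-or-create` dispatched above: cephadm mints a per-host key restricted to the `profile crash` capability on both mon and mgr. The equivalent direct call, with the arguments copied from the audit line:

    import subprocess

    # get-or-create is idempotent: if client.crash.compute-1 already
    # exists, the existing key is returned unchanged.
    keyring = subprocess.check_output([
        "ceph", "auth", "get-or-create", "client.crash.compute-1",
        "mon", "profile crash",
        "mgr", "profile crash",
    ]).decode()
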
Dec 05 09:45:15 compute-0 ceph-mon[74418]: Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:45:15 compute-0 ceph-mon[74418]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:15 compute-0 ceph-mon[74418]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 05 09:45:15 compute-0 ceph-mon[74418]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:15 compute-0 ceph-mon[74418]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 05 09:45:15 compute-0 ceph-mon[74418]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:15 compute-0 ceph-mon[74418]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec 05 09:45:15 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 05 09:45:15 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:45:15 compute-0 ceph-mon[74418]: Deploying daemon crash.compute-1 on compute-1
Dec 05 09:45:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:16 compute-0 ceph-mon[74418]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:45:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:45:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:45:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 05 09:45:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:17 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 38ad91e0-2912-4290-995f-c7ff0bb6a781 (Updating crash deployment (+1 -> 2))
Dec 05 09:45:17 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 38ad91e0-2912-4290-995f-c7ff0bb6a781 (Updating crash deployment (+1 -> 2)) in 3 seconds
Dec 05 09:45:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 05 09:45:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:45:17 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:45:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:45:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:45:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:45:17 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:45:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:45:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:45:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:45:17 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:45:17 compute-0 sudo[80774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:45:17 compute-0 sudo[80774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:17 compute-0 sudo[80774]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:17 compute-0 sudo[80799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:45:17 compute-0 sudo[80799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
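
This is the OSD creation step. The cephadm shim (the copied cephadm.<digest> script) launches the Ceph container and runs `ceph-volume lvm batch --no-auto` against the pre-created logical volume /dev/ceph_vg0/ceph_lv0; `--no-systemd` is passed because cephadm generates the systemd units itself, and CEPH_VOLUME_OSDSPEC_AFFINITY tags the new OSD with its drive-group name (default_drive_group) so the orchestrator can match it back to the spec. Reduced to a direct call, as a sketch (in reality this runs inside the container cephadm starts):

    import os
    import subprocess

    env = dict(os.environ, CEPH_VOLUME_OSDSPEC_AFFINITY="default_drive_group")
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "--yes", "--no-systemd"],
        env=env, check=True,
    )
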
Dec 05 09:45:17 compute-0 podman[80862]: 2025-12-05 09:45:17.649497655 +0000 UTC m=+0.038648282 container create 0708bc3d04e4fe66de897e024c677413a1c8fb5c130ef80fe4175342dcc84e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 09:45:17 compute-0 systemd[1]: Started libpod-conmon-0708bc3d04e4fe66de897e024c677413a1c8fb5c130ef80fe4175342dcc84e07.scope.
Dec 05 09:45:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:17 compute-0 podman[80862]: 2025-12-05 09:45:17.717340864 +0000 UTC m=+0.106491541 container init 0708bc3d04e4fe66de897e024c677413a1c8fb5c130ef80fe4175342dcc84e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 09:45:17 compute-0 podman[80862]: 2025-12-05 09:45:17.633172896 +0000 UTC m=+0.022323543 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:45:17 compute-0 podman[80862]: 2025-12-05 09:45:17.728327716 +0000 UTC m=+0.117478343 container start 0708bc3d04e4fe66de897e024c677413a1c8fb5c130ef80fe4175342dcc84e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 05 09:45:17 compute-0 frosty_napier[80879]: 167 167
Dec 05 09:45:17 compute-0 systemd[1]: libpod-0708bc3d04e4fe66de897e024c677413a1c8fb5c130ef80fe4175342dcc84e07.scope: Deactivated successfully.
Dec 05 09:45:17 compute-0 podman[80862]: 2025-12-05 09:45:17.735965968 +0000 UTC m=+0.125116615 container attach 0708bc3d04e4fe66de897e024c677413a1c8fb5c130ef80fe4175342dcc84e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 09:45:17 compute-0 podman[80862]: 2025-12-05 09:45:17.736759677 +0000 UTC m=+0.125910314 container died 0708bc3d04e4fe66de897e024c677413a1c8fb5c130ef80fe4175342dcc84e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 09:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f362e8070e514355f413ab4c2743ad534ee13ce53b2e79b5153af931b8f89ff-merged.mount: Deactivated successfully.
Dec 05 09:45:17 compute-0 podman[80862]: 2025-12-05 09:45:17.838389391 +0000 UTC m=+0.227540018 container remove 0708bc3d04e4fe66de897e024c677413a1c8fb5c130ef80fe4175342dcc84e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:45:17 compute-0 systemd[1]: libpod-conmon-0708bc3d04e4fe66de897e024c677413a1c8fb5c130ef80fe4175342dcc84e07.scope: Deactivated successfully.
Dec 05 09:45:17 compute-0 podman[80903]: 2025-12-05 09:45:17.987477997 +0000 UTC m=+0.040904966 container create b1070c35272aa09b1d71536d9e604e2b30f5ab18a0a79958c1f73f03a7f18e0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_jemison, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:45:18 compute-0 systemd[1]: Started libpod-conmon-b1070c35272aa09b1d71536d9e604e2b30f5ab18a0a79958c1f73f03a7f18e0f.scope.
Dec 05 09:45:18 compute-0 podman[80903]: 2025-12-05 09:45:17.96953568 +0000 UTC m=+0.022962669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:45:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8faa556a6c746761fa2d0f6d07599891558865d69c7bf9ad9e8360d9614062d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8faa556a6c746761fa2d0f6d07599891558865d69c7bf9ad9e8360d9614062d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8faa556a6c746761fa2d0f6d07599891558865d69c7bf9ad9e8360d9614062d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8faa556a6c746761fa2d0f6d07599891558865d69c7bf9ad9e8360d9614062d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8faa556a6c746761fa2d0f6d07599891558865d69c7bf9ad9e8360d9614062d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:18 compute-0 podman[80903]: 2025-12-05 09:45:18.081559582 +0000 UTC m=+0.134986571 container init b1070c35272aa09b1d71536d9e604e2b30f5ab18a0a79958c1f73f03a7f18e0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_jemison, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 05 09:45:18 compute-0 podman[80903]: 2025-12-05 09:45:18.090740091 +0000 UTC m=+0.144167060 container start b1070c35272aa09b1d71536d9e604e2b30f5ab18a0a79958c1f73f03a7f18e0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:45:18 compute-0 podman[80903]: 2025-12-05 09:45:18.094715075 +0000 UTC m=+0.148142054 container attach b1070c35272aa09b1d71536d9e604e2b30f5ab18a0a79958c1f73f03a7f18e0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:45:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:45:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:45:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:45:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:45:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:45:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:18 compute-0 sad_jemison[80919]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:45:18 compute-0 sad_jemison[80919]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 09:45:18 compute-0 sad_jemison[80919]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 09:45:18 compute-0 sad_jemison[80919]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f2cb7ff3-5059-40ee-ae0a-c37b437655e2
Dec 05 09:45:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4934b92f-7ae0-4280-a278-f4e97a05f37b"} v 0)
Dec 05 09:45:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/827888853' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4934b92f-7ae0-4280-a278-f4e97a05f37b"}]: dispatch
Dec 05 09:45:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 05 09:45:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 09:45:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2"} v 0)
Dec 05 09:45:19 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3802622554' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2"}]: dispatch
Dec 05 09:45:19 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/827888853' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4934b92f-7ae0-4280-a278-f4e97a05f37b"}]': finished
Dec 05 09:45:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 05 09:45:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 05 09:45:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 09:45:19 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 05 09:45:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:19 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:19 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:19 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3802622554' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2"}]': finished
Dec 05 09:45:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
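
Two `osd new` registrations land almost together, one per host (uuid 4934b92f... from 192.168.122.101, uuid f2cb7ff3... from 192.168.122.100), and each commit advances the osdmap epoch: e4 "1 total, 0 up, 1 in", then e5 "2 total, 0 up, 2 in". `ceph osd new <uuid>` allocates an OSD id for the given uuid and, as I understand it, is idempotent: re-running it with the same uuid returns the same id. A sketch:

    import subprocess

    def osd_new(osd_uuid: str) -> int:
        # Prints the allocated OSD id on stdout; the same uuid yields
        # the same id on a re-run.
        return int(subprocess.check_output(["ceph", "osd", "new", osd_uuid]))
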
Dec 05 09:45:19 compute-0 ceph-mon[74418]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:19 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/827888853' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4934b92f-7ae0-4280-a278-f4e97a05f37b"}]: dispatch
Dec 05 09:45:19 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3802622554' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2"}]: dispatch
Dec 05 09:45:19 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 05 09:45:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:19 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:19 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:19 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:19 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:19 compute-0 sad_jemison[80919]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 05 09:45:19 compute-0 sad_jemison[80919]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 05 09:45:19 compute-0 sad_jemison[80919]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 05 09:45:19 compute-0 lvm[80980]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:45:19 compute-0 lvm[80980]: VG ceph_vg0 finished
Dec 05 09:45:19 compute-0 sad_jemison[80919]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:19 compute-0 sad_jemison[80919]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 05 09:45:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 05 09:45:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045628176' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 05 09:45:21 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 2 completed events
Dec 05 09:45:21 compute-0 sad_jemison[80919]:  stderr: got monmap epoch 1
Dec 05 09:45:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 05 09:45:21 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/811192981' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 05 09:45:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:45:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/827888853' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4934b92f-7ae0-4280-a278-f4e97a05f37b"}]': finished
Dec 05 09:45:21 compute-0 ceph-mon[74418]: osdmap e4: 1 total, 0 up, 1 in
Dec 05 09:45:21 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3802622554' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2"}]': finished
Dec 05 09:45:21 compute-0 ceph-mon[74418]: osdmap e5: 2 total, 0 up, 2 in
Dec 05 09:45:21 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:21 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4045628176' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 05 09:45:21 compute-0 sad_jemison[80919]: --> Creating keyring file for osd.1
Dec 05 09:45:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:21 compute-0 sad_jemison[80919]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 05 09:45:21 compute-0 sad_jemison[80919]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 05 09:45:21 compute-0 sad_jemison[80919]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid f2cb7ff3-5059-40ee-ae0a-c37b437655e2 --setuser ceph --setgroup ceph
Dec 05 09:45:21 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 05 09:45:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:45:22 compute-0 ceph-mon[74418]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:22 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/811192981' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 05 09:45:22 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:22 compute-0 ceph-mon[74418]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 05 09:45:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:23 compute-0 ceph-mon[74418]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:24 compute-0 sad_jemison[80919]:  stderr: 2025-12-05T09:45:21.225+0000 7fcd6c8ba740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec 05 09:45:24 compute-0 sad_jemison[80919]:  stderr: 2025-12-05T09:45:21.488+0000 7fcd6c8ba740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 05 09:45:24 compute-0 sad_jemison[80919]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
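
The two stderr lines above are expected on a first `--mkfs`: ceph-osd probes the new block device for a bluestore label and an fsid before it has written either, logs that none were found, then formats the device. Once prepare succeeds, the label can be inspected; a verification sketch using ceph-bluestore-tool's show-label subcommand (that its JSON output is keyed by device path is an assumption about the output shape):

    import json
    import subprocess

    label = json.loads(subprocess.check_output([
        "ceph-bluestore-tool", "show-label",
        "--dev", "/dev/ceph_vg0/ceph_lv0",
    ]))
    # osd_uuid should now match f2cb7ff3-5059-40ee-ae0a-c37b437655e2.
    print(label["/dev/ceph_vg0/ceph_lv0"]["osd_uuid"])
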
Dec 05 09:45:25 compute-0 sad_jemison[80919]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 09:45:25 compute-0 sad_jemison[80919]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 05 09:45:25 compute-0 sad_jemison[80919]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:25 compute-0 sad_jemison[80919]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:25 compute-0 sad_jemison[80919]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 05 09:45:25 compute-0 sad_jemison[80919]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 09:45:25 compute-0 sad_jemison[80919]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 05 09:45:25 compute-0 sad_jemison[80919]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 05 09:45:25 compute-0 ceph-mon[74418]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:25 compute-0 systemd[1]: libpod-b1070c35272aa09b1d71536d9e604e2b30f5ab18a0a79958c1f73f03a7f18e0f.scope: Deactivated successfully.
Dec 05 09:45:25 compute-0 systemd[1]: libpod-b1070c35272aa09b1d71536d9e604e2b30f5ab18a0a79958c1f73f03a7f18e0f.scope: Consumed 3.885s CPU time.
Dec 05 09:45:25 compute-0 podman[80903]: 2025-12-05 09:45:25.423616098 +0000 UTC m=+7.477043167 container died b1070c35272aa09b1d71536d9e604e2b30f5ab18a0a79958c1f73f03a7f18e0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 09:45:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8faa556a6c746761fa2d0f6d07599891558865d69c7bf9ad9e8360d9614062d-merged.mount: Deactivated successfully.
Dec 05 09:45:25 compute-0 podman[80903]: 2025-12-05 09:45:25.590803695 +0000 UTC m=+7.644230664 container remove b1070c35272aa09b1d71536d9e604e2b30f5ab18a0a79958c1f73f03a7f18e0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_jemison, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:45:25 compute-0 systemd[1]: libpod-conmon-b1070c35272aa09b1d71536d9e604e2b30f5ab18a0a79958c1f73f03a7f18e0f.scope: Deactivated successfully.
Dec 05 09:45:25 compute-0 sudo[80799]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:25 compute-0 sudo[81913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:45:25 compute-0 sudo[81913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:25 compute-0 sudo[81913]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:25 compute-0 sudo[81938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:45:25 compute-0 sudo[81938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:25 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:45:25 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:45:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:45:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:45:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:45:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:45:26 compute-0 podman[82004]: 2025-12-05 09:45:26.137968018 +0000 UTC m=+0.036552333 container create c8539927d92a133fd27349a5f9f10ce532327e53ea9dae5465508fa54b5bb61a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jemison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 09:45:26 compute-0 systemd[1]: Started libpod-conmon-c8539927d92a133fd27349a5f9f10ce532327e53ea9dae5465508fa54b5bb61a.scope.
Dec 05 09:45:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:26 compute-0 podman[82004]: 2025-12-05 09:45:26.123189315 +0000 UTC m=+0.021773660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:45:26 compute-0 podman[82004]: 2025-12-05 09:45:26.221391027 +0000 UTC m=+0.119975372 container init c8539927d92a133fd27349a5f9f10ce532327e53ea9dae5465508fa54b5bb61a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:45:26 compute-0 podman[82004]: 2025-12-05 09:45:26.227881323 +0000 UTC m=+0.126465668 container start c8539927d92a133fd27349a5f9f10ce532327e53ea9dae5465508fa54b5bb61a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:45:26 compute-0 podman[82004]: 2025-12-05 09:45:26.231048828 +0000 UTC m=+0.129633163 container attach c8539927d92a133fd27349a5f9f10ce532327e53ea9dae5465508fa54b5bb61a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 09:45:26 compute-0 quirky_jemison[82018]: 167 167
Dec 05 09:45:26 compute-0 systemd[1]: libpod-c8539927d92a133fd27349a5f9f10ce532327e53ea9dae5465508fa54b5bb61a.scope: Deactivated successfully.
Dec 05 09:45:26 compute-0 podman[82004]: 2025-12-05 09:45:26.233403785 +0000 UTC m=+0.131988090 container died c8539927d92a133fd27349a5f9f10ce532327e53ea9dae5465508fa54b5bb61a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 09:45:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aee64e57bcf60a4fd590feee226aba81fb82627624e8f9d2be343158e08a22d-merged.mount: Deactivated successfully.
Dec 05 09:45:26 compute-0 podman[82004]: 2025-12-05 09:45:26.272016345 +0000 UTC m=+0.170600660 container remove c8539927d92a133fd27349a5f9f10ce532327e53ea9dae5465508fa54b5bb61a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jemison, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:45:26 compute-0 systemd[1]: libpod-conmon-c8539927d92a133fd27349a5f9f10ce532327e53ea9dae5465508fa54b5bb61a.scope: Deactivated successfully.
Dec 05 09:45:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:26 compute-0 podman[82042]: 2025-12-05 09:45:26.421919001 +0000 UTC m=+0.043897289 container create 44fd98378be227a4656fa6c544666032202a2a4f9ef744b2436bf98e61d5d3f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 09:45:26 compute-0 systemd[1]: Started libpod-conmon-44fd98378be227a4656fa6c544666032202a2a4f9ef744b2436bf98e61d5d3f1.scope.
Dec 05 09:45:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d8ce7f0a48bb69214091fc5480466f5351e41a7868b041d63bb3a95cbb6d13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d8ce7f0a48bb69214091fc5480466f5351e41a7868b041d63bb3a95cbb6d13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d8ce7f0a48bb69214091fc5480466f5351e41a7868b041d63bb3a95cbb6d13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d8ce7f0a48bb69214091fc5480466f5351e41a7868b041d63bb3a95cbb6d13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:26 compute-0 podman[82042]: 2025-12-05 09:45:26.491177683 +0000 UTC m=+0.113155991 container init 44fd98378be227a4656fa6c544666032202a2a4f9ef744b2436bf98e61d5d3f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_franklin, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:45:26 compute-0 podman[82042]: 2025-12-05 09:45:26.401689219 +0000 UTC m=+0.023667527 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:45:26 compute-0 podman[82042]: 2025-12-05 09:45:26.498860356 +0000 UTC m=+0.120838644 container start 44fd98378be227a4656fa6c544666032202a2a4f9ef744b2436bf98e61d5d3f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 05 09:45:26 compute-0 podman[82042]: 2025-12-05 09:45:26.503262251 +0000 UTC m=+0.125240539 container attach 44fd98378be227a4656fa6c544666032202a2a4f9ef744b2436bf98e61d5d3f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_franklin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:45:26 compute-0 quirky_franklin[82058]: {
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:     "1": [
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:         {
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             "devices": [
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "/dev/loop3"
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             ],
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             "lv_name": "ceph_lv0",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             "lv_size": "21470642176",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             "name": "ceph_lv0",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             "tags": {
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.cluster_name": "ceph",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.crush_device_class": "",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.encrypted": "0",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.osd_id": "1",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.type": "block",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.vdo": "0",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:                 "ceph.with_tpm": "0"
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             },
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             "type": "block",
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:             "vg_name": "ceph_vg0"
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:         }
Dec 05 09:45:26 compute-0 quirky_franklin[82058]:     ]
Dec 05 09:45:26 compute-0 quirky_franklin[82058]: }
Dec 05 09:45:26 compute-0 systemd[1]: libpod-44fd98378be227a4656fa6c544666032202a2a4f9ef744b2436bf98e61d5d3f1.scope: Deactivated successfully.
Dec 05 09:45:26 compute-0 podman[82042]: 2025-12-05 09:45:26.824576926 +0000 UTC m=+0.446555244 container died 44fd98378be227a4656fa6c544666032202a2a4f9ef744b2436bf98e61d5d3f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_franklin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:45:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2d8ce7f0a48bb69214091fc5480466f5351e41a7868b041d63bb3a95cbb6d13-merged.mount: Deactivated successfully.
Dec 05 09:45:26 compute-0 podman[82042]: 2025-12-05 09:45:26.865915661 +0000 UTC m=+0.487893949 container remove 44fd98378be227a4656fa6c544666032202a2a4f9ef744b2436bf98e61d5d3f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 09:45:26 compute-0 systemd[1]: libpod-conmon-44fd98378be227a4656fa6c544666032202a2a4f9ef744b2436bf98e61d5d3f1.scope: Deactivated successfully.
Dec 05 09:45:26 compute-0 sudo[81938]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 05 09:45:26 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 05 09:45:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:45:26 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:45:26 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 05 09:45:26 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 05 09:45:26 compute-0 sudo[82078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:45:26 compute-0 sudo[82078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:26 compute-0 sudo[82078]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:27 compute-0 sudo[82103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:45:27 compute-0 sudo[82103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:45:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 05 09:45:27 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 05 09:45:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:45:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:45:27 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Dec 05 09:45:27 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Dec 05 09:45:27 compute-0 ceph-mon[74418]: pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:27 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 05 09:45:27 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:45:27 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 05 09:45:27 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:45:27 compute-0 podman[82168]: 2025-12-05 09:45:27.430035498 +0000 UTC m=+0.046038009 container create 14c0669f6ab03b768eb8bb2baa67509d77057191babd35cb6ac44df69857b6ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 05 09:45:27 compute-0 systemd[1]: Started libpod-conmon-14c0669f6ab03b768eb8bb2baa67509d77057191babd35cb6ac44df69857b6ef.scope.
Dec 05 09:45:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:27 compute-0 podman[82168]: 2025-12-05 09:45:27.485955432 +0000 UTC m=+0.101958003 container init 14c0669f6ab03b768eb8bb2baa67509d77057191babd35cb6ac44df69857b6ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 09:45:27 compute-0 podman[82168]: 2025-12-05 09:45:27.492839866 +0000 UTC m=+0.108842367 container start 14c0669f6ab03b768eb8bb2baa67509d77057191babd35cb6ac44df69857b6ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:45:27 compute-0 podman[82168]: 2025-12-05 09:45:27.496295188 +0000 UTC m=+0.112297749 container attach 14c0669f6ab03b768eb8bb2baa67509d77057191babd35cb6ac44df69857b6ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bohr, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:45:27 compute-0 strange_bohr[82182]: 167 167
Dec 05 09:45:27 compute-0 systemd[1]: libpod-14c0669f6ab03b768eb8bb2baa67509d77057191babd35cb6ac44df69857b6ef.scope: Deactivated successfully.
Dec 05 09:45:27 compute-0 conmon[82182]: conmon 14c0669f6ab03b768eb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14c0669f6ab03b768eb8bb2baa67509d77057191babd35cb6ac44df69857b6ef.scope/container/memory.events
Dec 05 09:45:27 compute-0 podman[82168]: 2025-12-05 09:45:27.498372718 +0000 UTC m=+0.114375249 container died 14c0669f6ab03b768eb8bb2baa67509d77057191babd35cb6ac44df69857b6ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bohr, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 09:45:27 compute-0 podman[82168]: 2025-12-05 09:45:27.413548255 +0000 UTC m=+0.029550786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:45:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-587e26c12884b43df8e678827758d5bcd9ec1970f1aa881127059edabaa40ea5-merged.mount: Deactivated successfully.
Dec 05 09:45:27 compute-0 podman[82168]: 2025-12-05 09:45:27.535262978 +0000 UTC m=+0.151265489 container remove 14c0669f6ab03b768eb8bb2baa67509d77057191babd35cb6ac44df69857b6ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bohr, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:45:27 compute-0 systemd[1]: libpod-conmon-14c0669f6ab03b768eb8bb2baa67509d77057191babd35cb6ac44df69857b6ef.scope: Deactivated successfully.
Dec 05 09:45:27 compute-0 podman[82215]: 2025-12-05 09:45:27.77386706 +0000 UTC m=+0.040732673 container create 0636bde809deec4db13f8493216d1067197035ff371af6a1b8050c13a44a9365 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:45:27 compute-0 systemd[1]: Started libpod-conmon-0636bde809deec4db13f8493216d1067197035ff371af6a1b8050c13a44a9365.scope.
Dec 05 09:45:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befb43db6e95a70574db90ce4be192c70c6c63fae786e0f8119cc84184112084/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befb43db6e95a70574db90ce4be192c70c6c63fae786e0f8119cc84184112084/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befb43db6e95a70574db90ce4be192c70c6c63fae786e0f8119cc84184112084/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befb43db6e95a70574db90ce4be192c70c6c63fae786e0f8119cc84184112084/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befb43db6e95a70574db90ce4be192c70c6c63fae786e0f8119cc84184112084/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:27 compute-0 podman[82215]: 2025-12-05 09:45:27.84054849 +0000 UTC m=+0.107414113 container init 0636bde809deec4db13f8493216d1067197035ff371af6a1b8050c13a44a9365 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:45:27 compute-0 podman[82215]: 2025-12-05 09:45:27.756329561 +0000 UTC m=+0.023195164 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:45:27 compute-0 podman[82215]: 2025-12-05 09:45:27.851497261 +0000 UTC m=+0.118362844 container start 0636bde809deec4db13f8493216d1067197035ff371af6a1b8050c13a44a9365 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate-test, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 09:45:27 compute-0 podman[82215]: 2025-12-05 09:45:27.85519037 +0000 UTC m=+0.122055943 container attach 0636bde809deec4db13f8493216d1067197035ff371af6a1b8050c13a44a9365 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate-test, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:45:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate-test[82231]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 05 09:45:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate-test[82231]:                             [--no-systemd] [--no-tmpfs]
Dec 05 09:45:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate-test[82231]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 05 09:45:28 compute-0 systemd[1]: libpod-0636bde809deec4db13f8493216d1067197035ff371af6a1b8050c13a44a9365.scope: Deactivated successfully.
Dec 05 09:45:28 compute-0 podman[82215]: 2025-12-05 09:45:28.041718289 +0000 UTC m=+0.308583872 container died 0636bde809deec4db13f8493216d1067197035ff371af6a1b8050c13a44a9365 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:45:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-befb43db6e95a70574db90ce4be192c70c6c63fae786e0f8119cc84184112084-merged.mount: Deactivated successfully.
Dec 05 09:45:28 compute-0 podman[82215]: 2025-12-05 09:45:28.090491743 +0000 UTC m=+0.357357316 container remove 0636bde809deec4db13f8493216d1067197035ff371af6a1b8050c13a44a9365 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:45:28 compute-0 systemd[1]: libpod-conmon-0636bde809deec4db13f8493216d1067197035ff371af6a1b8050c13a44a9365.scope: Deactivated successfully.
Dec 05 09:45:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:28 compute-0 ceph-mon[74418]: Deploying daemon osd.1 on compute-0
Dec 05 09:45:28 compute-0 ceph-mon[74418]: Deploying daemon osd.0 on compute-1
Dec 05 09:45:28 compute-0 systemd[1]: Reloading.
Dec 05 09:45:28 compute-0 systemd-rc-local-generator[82291]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:45:28 compute-0 systemd-sysv-generator[82296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:45:28 compute-0 systemd[1]: Reloading.
Dec 05 09:45:28 compute-0 systemd-rc-local-generator[82331]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:45:28 compute-0 systemd-sysv-generator[82336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:45:28 compute-0 systemd[1]: Starting Ceph osd.1 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:45:29 compute-0 podman[82392]: 2025-12-05 09:45:29.231151442 +0000 UTC m=+0.036675006 container create 7a1e2442deb441c1eda56831ef67b2f4ac29b2d221f5d24ca12391a5e0c46bcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 09:45:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73606eaa552fce78d60745d4ab5554bb21fe35d4e5b06be1409f954823ef3af1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73606eaa552fce78d60745d4ab5554bb21fe35d4e5b06be1409f954823ef3af1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73606eaa552fce78d60745d4ab5554bb21fe35d4e5b06be1409f954823ef3af1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73606eaa552fce78d60745d4ab5554bb21fe35d4e5b06be1409f954823ef3af1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73606eaa552fce78d60745d4ab5554bb21fe35d4e5b06be1409f954823ef3af1/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:29 compute-0 podman[82392]: 2025-12-05 09:45:29.307360979 +0000 UTC m=+0.112884553 container init 7a1e2442deb441c1eda56831ef67b2f4ac29b2d221f5d24ca12391a5e0c46bcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:45:29 compute-0 podman[82392]: 2025-12-05 09:45:29.215011947 +0000 UTC m=+0.020535541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:45:29 compute-0 podman[82392]: 2025-12-05 09:45:29.314778187 +0000 UTC m=+0.120301751 container start 7a1e2442deb441c1eda56831ef67b2f4ac29b2d221f5d24ca12391a5e0c46bcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 09:45:29 compute-0 podman[82392]: 2025-12-05 09:45:29.319031308 +0000 UTC m=+0.124554872 container attach 7a1e2442deb441c1eda56831ef67b2f4ac29b2d221f5d24ca12391a5e0c46bcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 05 09:45:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate[82407]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 09:45:29 compute-0 bash[82392]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 09:45:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate[82407]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 09:45:29 compute-0 bash[82392]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 09:45:30 compute-0 lvm[82489]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:45:30 compute-0 lvm[82489]: VG ceph_vg0 finished
Dec 05 09:45:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate[82407]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 05 09:45:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate[82407]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 09:45:30 compute-0 bash[82392]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 05 09:45:30 compute-0 bash[82392]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 09:45:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate[82407]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 09:45:30 compute-0 bash[82392]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 09:45:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate[82407]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 09:45:30 compute-0 bash[82392]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 09:45:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate[82407]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 05 09:45:30 compute-0 bash[82392]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 05 09:45:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate[82407]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:30 compute-0 bash[82392]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate[82407]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:30 compute-0 bash[82392]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate[82407]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 05 09:45:30 compute-0 bash[82392]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 05 09:45:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate[82407]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 09:45:30 compute-0 bash[82392]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 09:45:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate[82407]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 05 09:45:30 compute-0 bash[82392]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 05 09:45:30 compute-0 systemd[1]: libpod-7a1e2442deb441c1eda56831ef67b2f4ac29b2d221f5d24ca12391a5e0c46bcd.scope: Deactivated successfully.
Dec 05 09:45:30 compute-0 systemd[1]: libpod-7a1e2442deb441c1eda56831ef67b2f4ac29b2d221f5d24ca12391a5e0c46bcd.scope: Consumed 1.363s CPU time.
Dec 05 09:45:30 compute-0 podman[82392]: 2025-12-05 09:45:30.590079657 +0000 UTC m=+1.395603221 container died 7a1e2442deb441c1eda56831ef67b2f4ac29b2d221f5d24ca12391a5e0c46bcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec 05 09:45:31 compute-0 ceph-mon[74418]: pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-73606eaa552fce78d60745d4ab5554bb21fe35d4e5b06be1409f954823ef3af1-merged.mount: Deactivated successfully.
Dec 05 09:45:33 compute-0 ceph-mon[74418]: pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:33 compute-0 ceph-mon[74418]: pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:33 compute-0 podman[82392]: 2025-12-05 09:45:33.65888981 +0000 UTC m=+4.464413414 container remove 7a1e2442deb441c1eda56831ef67b2f4ac29b2d221f5d24ca12391a5e0c46bcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1-activate, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:45:33 compute-0 podman[82658]: 2025-12-05 09:45:33.883983699 +0000 UTC m=+0.060322740 container create 2e7da1a95f327d65ed4ac6bfd995b41bd084e2e361c3f751676879a7f55fe3c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 09:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9358b8043052ec66c86e04122b1103cdd9fb97e01b08286cc56768ac406ed562/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9358b8043052ec66c86e04122b1103cdd9fb97e01b08286cc56768ac406ed562/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9358b8043052ec66c86e04122b1103cdd9fb97e01b08286cc56768ac406ed562/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9358b8043052ec66c86e04122b1103cdd9fb97e01b08286cc56768ac406ed562/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9358b8043052ec66c86e04122b1103cdd9fb97e01b08286cc56768ac406ed562/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:33 compute-0 podman[82658]: 2025-12-05 09:45:33.933402197 +0000 UTC m=+0.109741268 container init 2e7da1a95f327d65ed4ac6bfd995b41bd084e2e361c3f751676879a7f55fe3c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:45:33 compute-0 podman[82658]: 2025-12-05 09:45:33.944074232 +0000 UTC m=+0.120413273 container start 2e7da1a95f327d65ed4ac6bfd995b41bd084e2e361c3f751676879a7f55fe3c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:45:33 compute-0 podman[82658]: 2025-12-05 09:45:33.848759849 +0000 UTC m=+0.025098910 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:45:33 compute-0 bash[82658]: 2e7da1a95f327d65ed4ac6bfd995b41bd084e2e361c3f751676879a7f55fe3c5
Dec 05 09:45:33 compute-0 systemd[1]: Started Ceph osd.1 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:45:33 compute-0 ceph-osd[82677]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 09:45:33 compute-0 ceph-osd[82677]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec 05 09:45:33 compute-0 ceph-osd[82677]: pidfile_write: ignore empty --pid-file
Dec 05 09:45:33 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:33 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:33 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:33 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:33 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:33 compute-0 sudo[82103]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:45:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:45:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:34 compute-0 sudo[82698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:45:34 compute-0 sudo[82698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:34 compute-0 sudo[82698]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:34 compute-0 sudo[82723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:45:34 compute-0 sudo[82723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253dc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253dc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253dc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253dc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253dc00 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:34 compute-0 ceph-osd[82677]: bdev(0x563f2253d800 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:34 compute-0 podman[82789]: 2025-12-05 09:45:34.952034096 +0000 UTC m=+0.098336286 container create 75c214bdbc0566a0443cc3d91158d9580f6f5fc76483a921c32d39f432c7936d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_liskov, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 05 09:45:34 compute-0 podman[82789]: 2025-12-05 09:45:34.875179842 +0000 UTC m=+0.021482052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:45:34 compute-0 systemd[1]: Started libpod-conmon-75c214bdbc0566a0443cc3d91158d9580f6f5fc76483a921c32d39f432c7936d.scope.
Dec 05 09:45:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:35 compute-0 podman[82789]: 2025-12-05 09:45:35.031925081 +0000 UTC m=+0.178227281 container init 75c214bdbc0566a0443cc3d91158d9580f6f5fc76483a921c32d39f432c7936d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_liskov, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:45:35 compute-0 podman[82789]: 2025-12-05 09:45:35.040751073 +0000 UTC m=+0.187053263 container start 75c214bdbc0566a0443cc3d91158d9580f6f5fc76483a921c32d39f432c7936d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:45:35 compute-0 podman[82789]: 2025-12-05 09:45:35.044335948 +0000 UTC m=+0.190638138 container attach 75c214bdbc0566a0443cc3d91158d9580f6f5fc76483a921c32d39f432c7936d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:45:35 compute-0 systemd[1]: libpod-75c214bdbc0566a0443cc3d91158d9580f6f5fc76483a921c32d39f432c7936d.scope: Deactivated successfully.
Dec 05 09:45:35 compute-0 vigorous_liskov[82806]: 167 167
Dec 05 09:45:35 compute-0 conmon[82806]: conmon 75c214bdbc0566a0443c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75c214bdbc0566a0443cc3d91158d9580f6f5fc76483a921c32d39f432c7936d.scope/container/memory.events
Dec 05 09:45:35 compute-0 podman[82789]: 2025-12-05 09:45:35.047124354 +0000 UTC m=+0.193426544 container died 75c214bdbc0566a0443cc3d91158d9580f6f5fc76483a921c32d39f432c7936d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_liskov, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 09:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-808d467f1c6e10707bdbe1e354d3a0575c51cb74409cf5ebc561109a4ad1a9d1-merged.mount: Deactivated successfully.
Dec 05 09:45:35 compute-0 podman[82789]: 2025-12-05 09:45:35.090328725 +0000 UTC m=+0.236630915 container remove 75c214bdbc0566a0443cc3d91158d9580f6f5fc76483a921c32d39f432c7936d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 09:45:35 compute-0 systemd[1]: libpod-conmon-75c214bdbc0566a0443cc3d91158d9580f6f5fc76483a921c32d39f432c7936d.scope: Deactivated successfully.
Dec 05 09:45:35 compute-0 ceph-osd[82677]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 05 09:45:35 compute-0 ceph-osd[82677]: load: jerasure load: lrc 
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:35 compute-0 podman[82836]: 2025-12-05 09:45:35.265547615 +0000 UTC m=+0.046091321 container create a2fc565d258a15a8677566457b8e3488d039f684b0f72784303d781fbce42d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:45:35 compute-0 systemd[1]: Started libpod-conmon-a2fc565d258a15a8677566457b8e3488d039f684b0f72784303d781fbce42d53.scope.
Dec 05 09:45:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f971f566dbd62eefa66a493caeda742c3e6af1eb577bfd7768dffd70a20050e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f971f566dbd62eefa66a493caeda742c3e6af1eb577bfd7768dffd70a20050e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f971f566dbd62eefa66a493caeda742c3e6af1eb577bfd7768dffd70a20050e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f971f566dbd62eefa66a493caeda742c3e6af1eb577bfd7768dffd70a20050e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:45:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:35 compute-0 podman[82836]: 2025-12-05 09:45:35.247639347 +0000 UTC m=+0.028183183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:45:35 compute-0 podman[82836]: 2025-12-05 09:45:35.344530629 +0000 UTC m=+0.125074355 container init a2fc565d258a15a8677566457b8e3488d039f684b0f72784303d781fbce42d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 09:45:35 compute-0 podman[82836]: 2025-12-05 09:45:35.35298698 +0000 UTC m=+0.133530686 container start a2fc565d258a15a8677566457b8e3488d039f684b0f72784303d781fbce42d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:45:35 compute-0 podman[82836]: 2025-12-05 09:45:35.357188361 +0000 UTC m=+0.137732097 container attach a2fc565d258a15a8677566457b8e3488d039f684b0f72784303d781fbce42d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_fermat, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:45:35 compute-0 ceph-mon[74418]: pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:35 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:35 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:35 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:35 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:35 compute-0 ceph-osd[82677]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 05 09:45:35 compute-0 ceph-osd[82677]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:35 compute-0 lvm[82939]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:45:35 compute-0 lvm[82939]: VG ceph_vg0 finished
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:35 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:36 compute-0 crazy_fermat[82852]: {}
Dec 05 09:45:36 compute-0 systemd[1]: libpod-a2fc565d258a15a8677566457b8e3488d039f684b0f72784303d781fbce42d53.scope: Deactivated successfully.
Dec 05 09:45:36 compute-0 podman[82836]: 2025-12-05 09:45:36.03999247 +0000 UTC m=+0.820536176 container died a2fc565d258a15a8677566457b8e3488d039f684b0f72784303d781fbce42d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:45:36 compute-0 systemd[1]: libpod-a2fc565d258a15a8677566457b8e3488d039f684b0f72784303d781fbce42d53.scope: Consumed 1.022s CPU time.
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d8c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d9000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d9000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d9000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d9000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount shared_bdev_used = 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: RocksDB version: 7.9.2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Git sha 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: DB SUMMARY
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: DB Session ID:  BJCE1Y8PT92KQ5RPTU1P
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: CURRENT file:  CURRENT
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                         Options.error_if_exists: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.create_if_missing: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                                     Options.env: 0x563f233a9dc0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                                Options.info_log: 0x563f233ad7a0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                              Options.statistics: (nil)
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.use_fsync: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                              Options.db_log_dir: 
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                                 Options.wal_dir: db.wal
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.write_buffer_manager: 0x563f234a2a00
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.unordered_write: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.row_cache: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                              Options.wal_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.two_write_queues: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.wal_compression: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.atomic_flush: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.max_background_jobs: 4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.max_background_compactions: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.max_subcompactions: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.max_open_files: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Compression algorithms supported:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kZSTD supported: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kXpressCompression supported: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kBZip2Compression supported: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kLZ4Compression supported: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kZlibCompression supported: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kSnappyCompression supported: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d29b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d29b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d29b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
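The eleven extra column families (m-*, p-*, O-*, L, P) are BlueStore's sharded key prefixes. As a sanity check outside the OSD, a plain RocksDB build can enumerate them with DB::ListColumnFamilies; note that on a live BlueStore OSD the database lives inside BlueFS rather than on a POSIX path, so this sketch assumes a database exported to an ordinary directory (the path below is hypothetical).

    #include <rocksdb/db.h>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
      std::vector<std::string> families;
      rocksdb::DBOptions db_opts;
      // Hypothetical path: a BlueStore RocksDB exported to a plain directory.
      rocksdb::Status s = rocksdb::DB::ListColumnFamilies(
          db_opts, "/tmp/osd-1-db-export", &families);
      if (!s.ok()) {
        std::cerr << "ListColumnFamilies failed: " << s.ToString() << "\n";
        return 1;
      }
      for (const auto& cf : families)
        std::cout << cf << "\n";  // default, m-0..m-2, p-0..p-2, O-0..O-2, L, P
      return 0;
    }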
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4356f0b2-734f-49d4-be38-2fccee642a28
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927936299008, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927936299280, "job": 1, "event": "recovery_finished"}
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
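The option string logged by _open_db is the value of Ceph's bluestore_rocksdb_options setting. A minimal sketch of feeding the same string through RocksDB's own parser (the legacy GetOptionsFromString overload from rocksdb/convenience.h; Ceph's RocksDBStore does its own tokenizing, so this is illustrative rather than the OSD's exact code path):

    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>
    #include <iostream>
    #include <string>

    int main() {
      rocksdb::Options base, out;  // start from RocksDB defaults
      const std::string opts =     // exact string from the _open_db line
          "compression=kLZ4Compression,max_write_buffer_number=64,"
          "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
          "write_buffer_size=16777216,max_background_jobs=4,"
          "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
          "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
          "max_total_wal_size=1073741824,writable_file_max_buffer_size=0";
      rocksdb::Status s = rocksdb::GetOptionsFromString(base, opts, &out);
      if (!s.ok()) {  // e.g. if this build rejects the "2MB" size suffix
        std::cerr << s.ToString() << "\n";
        return 1;
      }
      std::cout << "write_buffer_size=" << out.write_buffer_size << "\n";  // 16777216
      return 0;
    }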
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: freelist init
Dec 05 09:45:36 compute-0 ceph-osd[82677]: freelist _read_cfg
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
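Worked check on the _init_alloc figures: capacity 0x4ffc00000 is 21470642176 bytes, the same size the bdev open below reports as 20 GiB, and capacity minus free is 0x4ffc00000 - 0x4ffbfd000 = 0x3000, i.e. only three 4 KiB min_alloc_size units are in use, which is consistent with the tiny fragmentation figure of 1.9e-07. The same arithmetic as compile-time asserts:

    #include <cstdint>

    // Figures copied from the _init_alloc line above.
    static_assert(UINT64_C(0x4ffc00000) == 21470642176ULL,
                  "capacity: 4 MiB short of a full 20 GiB");
    static_assert(UINT64_C(0x4ffc00000) - UINT64_C(0x4ffbfd000) == 0x3000,
                  "allocated so far: 12 KiB = three 0x1000 blocks");
    int main() { return 0; }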
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs umount
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d9000 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 09:45:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d9000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d9000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d9000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bdev(0x563f233d9000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
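The st_blksize warning above comes from BlueStore probing the backing file with stat(2), finding a 512-byte preferred I/O size, and keeping its own 4096-byte bdev_block_size anyway; the preceding F_SET_FILE_RW_HINT EINVAL is apparently benign here as well, since the open proceeds. A hypothetical reproduction of the probe (the path is the OSD's block symlink):

    #include <sys/stat.h>
    #include <cstdio>

    int main() {
      const char* path = "/var/lib/ceph/osd/ceph-1/block";  // OSD block symlink
      struct stat st;
      if (stat(path, &st) != 0) { perror("stat"); return 1; }
      const long bdev_block_size = 4096;  // BlueStore's configured block size
      std::printf("st_blksize=%ld bdev_block_size=%ld\n",
                  (long)st.st_blksize, bdev_block_size);
      if ((long)st.st_blksize != bdev_block_size)
        std::printf("mismatch: logged, and 4096 is used anyway\n");
      return 0;
    }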
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluefs mount shared_bdev_used = 4718592
Dec 05 09:45:36 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: RocksDB version: 7.9.2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Git sha 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: DB SUMMARY
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: DB Session ID:  BJCE1Y8PT92KQ5RPTU1O
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: CURRENT file:  CURRENT
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                         Options.error_if_exists: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.create_if_missing: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                                     Options.env: 0x563f23546310
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                                Options.info_log: 0x563f233ad940
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                              Options.statistics: (nil)
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.use_fsync: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                              Options.db_log_dir: 
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                                 Options.wal_dir: db.wal
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.write_buffer_manager: 0x563f234a2a00
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.unordered_write: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.row_cache: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                              Options.wal_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.two_write_queues: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.wal_compression: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.atomic_flush: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.max_background_jobs: 4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.max_background_compactions: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.max_subcompactions: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.delayed_write_rate: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.max_open_files: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Compression algorithms supported:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kZSTD supported: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kXpressCompression supported: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kBZip2Compression supported: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kLZ4Compression supported: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kZlibCompression supported: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         kSnappyCompression supported: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: DMutex implementation: pthread_mutex_t
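The capability list above shows a build with LZ4/LZ4HC/Zlib/Snappy but no ZSTD, which matters because the option string requests kLZ4Compression. The same probe is available programmatically via rocksdb::GetSupportedCompressions() from rocksdb/convenience.h:

    #include <rocksdb/convenience.h>
    #include <iostream>

    int main() {
      // CompressionType values are stable on-disk identifiers:
      // 1=Snappy, 2=Zlib, 3=BZip2, 4=LZ4, 5=LZ4HC, 6=Xpress, 7=ZSTD.
      for (rocksdb::CompressionType t : rocksdb::GetSupportedCompressions())
        std::cout << "supported compression type " << static_cast<int>(t) << "\n";
      return 0;
    }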
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233ad680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
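For reference, the headline numbers in the [default] column-family dump above map onto RocksDB's C++ options structs roughly as follows. This is a minimal sketch, not Ceph's code: the log's BinnedLRUCache is Ceph's own Cache implementation (stock NewLRUCache stands in here), and the bloom bits-per-key behind "filter_policy: bloomfilter" is not recorded in the dump, so 10 below is an assumption.

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::ColumnFamilyOptions MakeDefaultCfOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 * 1024 * 1024;           // 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64 * 1024 * 1024;       // 67108864
      cf.max_bytes_for_level_base = 1024 * 1024 * 1024;  // 1073741824
      cf.max_bytes_for_level_multiplier = 8;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                                  // 30 days

      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.format_version = 5;
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // assumed bits/key
      t.block_cache = rocksdb::NewLRUCache(483183820, 4);        // capacity, shard bits
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }

    int main() {
      auto cf = MakeDefaultCfOptions();
      (void)cf;
      return 0;
    }

The m-*, p-*, and O-* families printed next reuse the same settings, which is why the dump repeats nearly verbatim for each of them.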
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233ad680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233ad680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233ad680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
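The per-family dumps in this stretch repeat one identical tuning profile, and the family names (m-*, p-*, O-*) appear to match BlueStore's default RocksDB column-family sharding, where "m" and "p" hold omap data and "O" holds onodes. For reference, the logged values map onto RocksDB's public C++ option structs roughly as below. This is a sketch only: the bloom bits-per-key is an assumption (the dump says only "bloomfilter"), and stock LRUCache stands in for Ceph's private BinnedLRUCache.

    #include <memory>
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    rocksdb::ColumnFamilyOptions MakeCfOptions(std::shared_ptr<rocksdb::Cache> cache) {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;                       // block_size: 4096
      t.cache_index_and_filter_blocks = true;    // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;   // pin_top_level_index_and_filter: 1
      t.whole_key_filtering = true;              // whole_key_filtering: 1
      t.format_version = 5;                      // format_version: 5
      t.block_cache = std::move(cache);          // BinnedLRUCache in the log; LRUCache here
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed

      rocksdb::ColumnFamilyOptions cf;
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      cf.write_buffer_size = 16 << 20;           // 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64 << 20;       // 67108864
      cf.max_bytes_for_level_base = 1 << 30;     // 1073741824
      cf.max_bytes_for_level_multiplier = 8;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.force_consistency_checks = true;
      cf.ttl = 2592000;                          // 30 days
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(32768, 16384));
      return cf;
    }

One consequence of these numbers worth noting: with write_buffer_size at 16 MiB and min_write_buffer_number_to_merge at 6, memtables are merged and flushed in roughly 96 MiB batches rather than one at a time.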
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
[... options for column family [p-0] elided: identical, line for line, to the [m-2] dump above (same table_factory at 0x563f233ad680, same block_cache at 0x563f225d3350, capacity 483183820) ...]
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
[... options for column family [p-1] elided: identical to the [m-2] dump above ...]
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
[... options for column family [p-2] elided: identical to the [m-2] dump above ...]
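The [O-0] family that follows is the one place this output stops repeating itself: its table_factory and block_cache pointers differ from the shared ones above, and its cache capacity is 536870912 bytes (exactly 512 MiB) rather than 483183820 (apparently 45% of a 1 GiB budget). In upstream RocksDB terms the split looks roughly like this, with stock NewLRUCache again standing in for the BinnedLRUCache named in the log:

    #include <rocksdb/cache.h>

    // Two independent block caches, mirroring the two block_cache pointers in
    // the dump: capacity, num_shard_bits=4, strict_capacity_limit=false,
    // high_pri_pool_ratio=0.0 all match the logged block_cache_options.
    auto kv_cache    = rocksdb::NewLRUCache(483183820, 4, false, 0.0);  // m-*/p-* families
    auto onode_cache = rocksdb::NewLRUCache(536870912, 4, false, 0.0);  // O-* families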
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d29b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d29b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:           Options.merge_operator: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f233adac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563f225d29b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.compression: LZ4
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.num_levels: 7
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4356f0b2-734f-49d4-be38-2fccee642a28
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927936543864, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 05 09:45:36 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 05 09:45:37 compute-0 ceph-osd[82677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927937384421, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927936, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4356f0b2-734f-49d4-be38-2fccee642a28", "db_session_id": "BJCE1Y8PT92KQ5RPTU1O", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:45:37 compute-0 ceph-osd[82677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927937389034, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927937, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4356f0b2-734f-49d4-be38-2fccee642a28", "db_session_id": "BJCE1Y8PT92KQ5RPTU1O", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:45:37 compute-0 ceph-osd[82677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927937398062, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927937, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4356f0b2-734f-49d4-be38-2fccee642a28", "db_session_id": "BJCE1Y8PT92KQ5RPTU1O", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:45:37 compute-0 ceph-osd[82677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764927937400785, "job": 1, "event": "recovery_finished"}
Dec 05 09:45:37 compute-0 ceph-osd[82677]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 05 09:45:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f971f566dbd62eefa66a493caeda742c3e6af1eb577bfd7768dffd70a20050e1-merged.mount: Deactivated successfully.
Dec 05 09:45:37 compute-0 podman[82836]: 2025-12-05 09:45:37.424520801 +0000 UTC m=+2.205064507 container remove a2fc565d258a15a8677566457b8e3488d039f684b0f72784303d781fbce42d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:45:37 compute-0 ceph-mon[74418]: pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563f23574000
Dec 05 09:45:37 compute-0 ceph-osd[82677]: rocksdb: DB pointer 0x563f23554000
Dec 05 09:45:37 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 05 09:45:37 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 05 09:45:37 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 05 09:45:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:45:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.8 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 05 09:45:37 compute-0 ceph-osd[82677]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 05 09:45:37 compute-0 ceph-osd[82677]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 05 09:45:37 compute-0 ceph-osd[82677]: _get_class not permitted to load lua
Dec 05 09:45:37 compute-0 ceph-osd[82677]: _get_class not permitted to load sdk
Dec 05 09:45:37 compute-0 ceph-osd[82677]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 05 09:45:37 compute-0 ceph-osd[82677]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 05 09:45:37 compute-0 ceph-osd[82677]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 05 09:45:37 compute-0 ceph-osd[82677]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 05 09:45:37 compute-0 ceph-osd[82677]: osd.1 0 load_pgs
Dec 05 09:45:37 compute-0 ceph-osd[82677]: osd.1 0 load_pgs opened 0 pgs
Dec 05 09:45:37 compute-0 ceph-osd[82677]: osd.1 0 log_to_monitors true
Dec 05 09:45:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1[82673]: 2025-12-05T09:45:37.437+0000 7f638b979740 -1 osd.1 0 log_to_monitors true
Dec 05 09:45:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec 05 09:45:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3369399854,v1:192.168.122.100:6803/3369399854]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 05 09:45:37 compute-0 sudo[82723]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:45:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:45:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:37 compute-0 systemd[1]: libpod-conmon-a2fc565d258a15a8677566457b8e3488d039f684b0f72784303d781fbce42d53.scope: Deactivated successfully.
Dec 05 09:45:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec 05 09:45:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/1285266768,v1:192.168.122.101:6801/1285266768]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 05 09:45:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:38 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 05 09:45:38 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 05 09:45:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 05 09:45:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 09:45:39 compute-0 ceph-mon[74418]: from='osd.1 [v2:192.168.122.100:6802/3369399854,v1:192.168.122.100:6803/3369399854]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 05 09:45:39 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:39 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:39 compute-0 ceph-mon[74418]: from='osd.0 [v2:192.168.122.101:6800/1285266768,v1:192.168.122.101:6801/1285266768]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 05 09:45:40 compute-0 sudo[83384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktwbquqychuysqslhjlibyqlkpsxrxem ; /usr/bin/python3'
Dec 05 09:45:40 compute-0 sudo[83384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:40 compute-0 python3[83386]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:45:40 compute-0 podman[83388]: 2025-12-05 09:45:40.537972165 +0000 UTC m=+0.028042472 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:45:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3369399854,v1:192.168.122.100:6803/3369399854]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 05 09:45:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/1285266768,v1:192.168.122.101:6801/1285266768]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 05 09:45:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Dec 05 09:45:40 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Dec 05 09:45:41 compute-0 podman[83388]: 2025-12-05 09:45:41.134389046 +0000 UTC m=+0.624459333 container create 38161d5f0e2c2cff50c2039b9de202bad83bb6fe1828cb8a074b5ce5f4b319e0 (image=quay.io/ceph/ceph:v19, name=optimistic_cannon, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 09:45:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 05 09:45:41 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3369399854,v1:192.168.122.100:6803/3369399854]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 05 09:45:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec 05 09:45:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Dec 05 09:45:41 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/1285266768,v1:192.168.122.101:6801/1285266768]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 05 09:45:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Dec 05 09:45:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:41 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:41 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:41 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:41 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:41 compute-0 ceph-mon[74418]: pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:41 compute-0 systemd[1]: Started libpod-conmon-38161d5f0e2c2cff50c2039b9de202bad83bb6fe1828cb8a074b5ce5f4b319e0.scope.
Dec 05 09:45:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129254b4baea973f61cfdec3da2f4a943c4e0e13f22371ab24c87871101afad3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129254b4baea973f61cfdec3da2f4a943c4e0e13f22371ab24c87871101afad3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129254b4baea973f61cfdec3da2f4a943c4e0e13f22371ab24c87871101afad3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 05 09:45:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 09:45:41 compute-0 podman[83388]: 2025-12-05 09:45:41.683570923 +0000 UTC m=+1.173641250 container init 38161d5f0e2c2cff50c2039b9de202bad83bb6fe1828cb8a074b5ce5f4b319e0 (image=quay.io/ceph/ceph:v19, name=optimistic_cannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:45:41 compute-0 podman[83388]: 2025-12-05 09:45:41.696640547 +0000 UTC m=+1.186710834 container start 38161d5f0e2c2cff50c2039b9de202bad83bb6fe1828cb8a074b5ce5f4b319e0 (image=quay.io/ceph/ceph:v19, name=optimistic_cannon, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 09:45:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3369399854,v1:192.168.122.100:6803/3369399854]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 05 09:45:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/1285266768,v1:192.168.122.101:6801/1285266768]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec 05 09:45:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Dec 05 09:45:42 compute-0 ceph-osd[82677]: osd.1 0 done with init, starting boot process
Dec 05 09:45:42 compute-0 ceph-osd[82677]: osd.1 0 start_boot
Dec 05 09:45:42 compute-0 ceph-osd[82677]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 05 09:45:42 compute-0 ceph-osd[82677]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 05 09:45:42 compute-0 ceph-osd[82677]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 05 09:45:42 compute-0 ceph-osd[82677]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 05 09:45:42 compute-0 ceph-osd[82677]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 05 09:45:42 compute-0 podman[83388]: 2025-12-05 09:45:42.139675309 +0000 UTC m=+1.629745606 container attach 38161d5f0e2c2cff50c2039b9de202bad83bb6fe1828cb8a074b5ce5f4b319e0 (image=quay.io/ceph/ceph:v19, name=optimistic_cannon, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:45:42 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Dec 05 09:45:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 05 09:45:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947276593' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 09:45:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:42 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:42 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:45:42 compute-0 optimistic_cannon[83404]: 
Dec 05 09:45:42 compute-0 optimistic_cannon[83404]: {"fsid":"3c63ce0f-5206-59ae-8381-b67d0b6424b5","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":136,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":7,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1764927919,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-05T09:43:21:401410+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-05T09:44:57.902053+0000","services":{}},"progress_events":{}}
Dec 05 09:45:42 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:42 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:42 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:42 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:42 compute-0 systemd[1]: libpod-38161d5f0e2c2cff50c2039b9de202bad83bb6fe1828cb8a074b5ce5f4b319e0.scope: Deactivated successfully.
Dec 05 09:45:42 compute-0 conmon[83404]: conmon 38161d5f0e2c2cff50c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38161d5f0e2c2cff50c2039b9de202bad83bb6fe1828cb8a074b5ce5f4b319e0.scope/container/memory.events
Dec 05 09:45:42 compute-0 podman[83388]: 2025-12-05 09:45:42.428965901 +0000 UTC m=+1.919036198 container died 38161d5f0e2c2cff50c2039b9de202bad83bb6fe1828cb8a074b5ce5f4b319e0 (image=quay.io/ceph/ceph:v19, name=optimistic_cannon, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:45:42 compute-0 ceph-mon[74418]: purged_snaps scrub starts
Dec 05 09:45:42 compute-0 ceph-mon[74418]: purged_snaps scrub ok
Dec 05 09:45:42 compute-0 ceph-mon[74418]: purged_snaps scrub starts
Dec 05 09:45:42 compute-0 ceph-mon[74418]: purged_snaps scrub ok
Dec 05 09:45:42 compute-0 ceph-mon[74418]: pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:42 compute-0 ceph-mon[74418]: from='osd.1 [v2:192.168.122.100:6802/3369399854,v1:192.168.122.100:6803/3369399854]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 05 09:45:42 compute-0 ceph-mon[74418]: from='osd.0 [v2:192.168.122.101:6800/1285266768,v1:192.168.122.101:6801/1285266768]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 05 09:45:42 compute-0 ceph-mon[74418]: osdmap e6: 2 total, 0 up, 2 in
Dec 05 09:45:42 compute-0 ceph-mon[74418]: from='osd.1 [v2:192.168.122.100:6802/3369399854,v1:192.168.122.100:6803/3369399854]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 05 09:45:42 compute-0 ceph-mon[74418]: from='osd.0 [v2:192.168.122.101:6800/1285266768,v1:192.168.122.101:6801/1285266768]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 05 09:45:42 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:42 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:42 compute-0 ceph-mon[74418]: from='osd.1 [v2:192.168.122.100:6802/3369399854,v1:192.168.122.100:6803/3369399854]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 05 09:45:42 compute-0 ceph-mon[74418]: from='osd.0 [v2:192.168.122.101:6800/1285266768,v1:192.168.122.101:6801/1285266768]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec 05 09:45:42 compute-0 ceph-mon[74418]: osdmap e7: 2 total, 0 up, 2 in
Dec 05 09:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-129254b4baea973f61cfdec3da2f4a943c4e0e13f22371ab24c87871101afad3-merged.mount: Deactivated successfully.
Dec 05 09:45:43 compute-0 podman[83388]: 2025-12-05 09:45:43.140863278 +0000 UTC m=+2.630933565 container remove 38161d5f0e2c2cff50c2039b9de202bad83bb6fe1828cb8a074b5ce5f4b319e0 (image=quay.io/ceph/ceph:v19, name=optimistic_cannon, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 09:45:43 compute-0 systemd[1]: libpod-conmon-38161d5f0e2c2cff50c2039b9de202bad83bb6fe1828cb8a074b5ce5f4b319e0.scope: Deactivated successfully.
Dec 05 09:45:43 compute-0 sudo[83384]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:43 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:43 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:43 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:43 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v54: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:44 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:44 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:44 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:44 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:44 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:44 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:45 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:45 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:45 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:45 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v55: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:46 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:46 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:46 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:46 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:47 compute-0 ceph-mon[74418]: pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2947276593' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 09:45:47 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:47 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:47 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:47 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:47 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:47 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:47 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:47 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:45:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v56: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:48 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:48 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:49 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:49 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:49 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:49 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:49 compute-0 ceph-mon[74418]: pgmap v54: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:49 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:49 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:49 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:49 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:49 compute-0 ceph-mon[74418]: pgmap v55: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:49 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:49 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:45:49 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:49 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:49 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:49 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:49 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:49 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:45:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:49 compute-0 sudo[83440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:45:49 compute-0 sudo[83440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:49 compute-0 sudo[83440]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:50 compute-0 sudo[83465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:45:50 compute-0 sudo[83465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:50 compute-0 sudo[83465]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:50 compute-0 sudo[83490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 09:45:50 compute-0 sudo[83490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v57: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:50 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:50 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:50 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:50 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:45:50 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:50 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:50 compute-0 ceph-mon[74418]: pgmap v56: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:50 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:50 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:50 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:50 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:50 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:50 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:50 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:51 compute-0 podman[83581]: 2025-12-05 09:45:51.037555738 +0000 UTC m=+0.208887915 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:45:51 compute-0 podman[83581]: 2025-12-05 09:45:51.150551843 +0000 UTC m=+0.321884020 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 09:45:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:45:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:45:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:45:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:51 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:51 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:51 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:51 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:51 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:51 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:51 compute-0 sudo[83490]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:45:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:45:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:51 compute-0 sudo[83669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:45:51 compute-0 sudo[83669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:51 compute-0 sudo[83669]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:51 compute-0 sudo[83694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:45:51 compute-0 sudo[83694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:51 compute-0 ceph-mon[74418]: pgmap v57: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:51 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:51 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:51 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:51 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:51 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:51 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:51 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:51 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:51 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:51 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:52 compute-0 sudo[83694]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v58: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:52 compute-0 sudo[83751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:45:52 compute-0 sudo[83751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:52 compute-0 sudo[83751]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:52 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:52 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:52 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:52 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:52 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:52 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:52 compute-0 sudo[83776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- inventory --format=json-pretty --filter-for-batch
Dec 05 09:45:52 compute-0 sudo[83776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:45:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:45:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:52 compute-0 ceph-mon[74418]: pgmap v58: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:52 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:52 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:52 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:53 compute-0 podman[83841]: 2025-12-05 09:45:53.032042735 +0000 UTC m=+0.168689886 container create 36d9f07f4f3c1acb0e97b325034086b6726ffbf5449270159317ecea50029fbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jackson, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:45:53 compute-0 podman[83841]: 2025-12-05 09:45:52.986628392 +0000 UTC m=+0.123275563 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:45:53 compute-0 systemd[1]: Started libpod-conmon-36d9f07f4f3c1acb0e97b325034086b6726ffbf5449270159317ecea50029fbc.scope.
Dec 05 09:45:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:45:53 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:53 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:53 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:53 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:53 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:53 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:53 compute-0 podman[83841]: 2025-12-05 09:45:53.456004616 +0000 UTC m=+0.592651857 container init 36d9f07f4f3c1acb0e97b325034086b6726ffbf5449270159317ecea50029fbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:45:53 compute-0 thirsty_jackson[83856]: 167 167
Dec 05 09:45:53 compute-0 systemd[1]: libpod-36d9f07f4f3c1acb0e97b325034086b6726ffbf5449270159317ecea50029fbc.scope: Deactivated successfully.
Dec 05 09:45:53 compute-0 podman[83841]: 2025-12-05 09:45:53.47370633 +0000 UTC m=+0.610353481 container start 36d9f07f4f3c1acb0e97b325034086b6726ffbf5449270159317ecea50029fbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:45:53 compute-0 podman[83841]: 2025-12-05 09:45:53.588198066 +0000 UTC m=+0.724845267 container attach 36d9f07f4f3c1acb0e97b325034086b6726ffbf5449270159317ecea50029fbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jackson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Dec 05 09:45:53 compute-0 podman[83841]: 2025-12-05 09:45:53.589573685 +0000 UTC m=+0.726220886 container died 36d9f07f4f3c1acb0e97b325034086b6726ffbf5449270159317ecea50029fbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:45:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v59: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:54 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:54 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:54 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:54 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:54 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:54 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:54 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:54 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:55 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:55 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f80c76d11ef3bd8058be7e64041935092800b6f9aef8c162751f763e6d0aed01-merged.mount: Deactivated successfully.
Dec 05 09:45:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:55 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:55 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:55 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:45:55
Dec 05 09:45:55 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:45:55 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:45:55 compute-0 ceph-mgr[74711]: [balancer INFO root] No pools available
Dec 05 09:45:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:45:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:45:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:45:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:45:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:45:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:45:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:45:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:45:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:45:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v60: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:56 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:56 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:56 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:56 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:56 compute-0 ceph-mon[74418]: pgmap v59: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:56 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:56 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:56 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:56 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:56 compute-0 podman[83841]: 2025-12-05 09:45:56.688522574 +0000 UTC m=+3.825169745 container remove 36d9f07f4f3c1acb0e97b325034086b6726ffbf5449270159317ecea50029fbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 09:45:56 compute-0 systemd[1]: libpod-conmon-36d9f07f4f3c1acb0e97b325034086b6726ffbf5449270159317ecea50029fbc.scope: Deactivated successfully.
Dec 05 09:45:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:45:56 compute-0 podman[83882]: 2025-12-05 09:45:56.812650809 +0000 UTC m=+0.020947423 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:45:56 compute-0 podman[83882]: 2025-12-05 09:45:56.924361359 +0000 UTC m=+0.132657973 container create e5eefa9ee219e143974185e6ab4d81877802ce9f50308478e7a37ef8e9bba6f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cori, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 05 09:45:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:57 compute-0 systemd[1]: Started libpod-conmon-e5eefa9ee219e143974185e6ab4d81877802ce9f50308478e7a37ef8e9bba6f0.scope.
Dec 05 09:45:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:45:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56dae0c6df83e66226e95c9f6cbdb1ed97512228356d92645470e08668f9cc7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56dae0c6df83e66226e95c9f6cbdb1ed97512228356d92645470e08668f9cc7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56dae0c6df83e66226e95c9f6cbdb1ed97512228356d92645470e08668f9cc7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56dae0c6df83e66226e95c9f6cbdb1ed97512228356d92645470e08668f9cc7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:57 compute-0 podman[83882]: 2025-12-05 09:45:57.29474352 +0000 UTC m=+0.503040134 container init e5eefa9ee219e143974185e6ab4d81877802ce9f50308478e7a37ef8e9bba6f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cori, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:45:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:45:57 compute-0 podman[83882]: 2025-12-05 09:45:57.307013911 +0000 UTC m=+0.515310505 container start e5eefa9ee219e143974185e6ab4d81877802ce9f50308478e7a37ef8e9bba6f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cori, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:45:57 compute-0 podman[83882]: 2025-12-05 09:45:57.380775494 +0000 UTC m=+0.589072098 container attach e5eefa9ee219e143974185e6ab4d81877802ce9f50308478e7a37ef8e9bba6f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cori, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Dec 05 09:45:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:45:57 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:57 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:57 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:57 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 05 09:45:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 05 09:45:57 compute-0 ceph-mgr[74711]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Dec 05 09:45:57 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Dec 05 09:45:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 05 09:45:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:57 compute-0 ceph-mon[74418]: pgmap v60: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:57 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:57 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:57 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:57 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:57 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:57 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:57 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:57 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:57 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 05 09:45:57 compute-0 ceph-mon[74418]: Adjusting osd_memory_target on compute-1 to  5247M
Dec 05 09:45:57 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:58 compute-0 focused_cori[83898]: [
Dec 05 09:45:58 compute-0 focused_cori[83898]:     {
Dec 05 09:45:58 compute-0 focused_cori[83898]:         "available": false,
Dec 05 09:45:58 compute-0 focused_cori[83898]:         "being_replaced": false,
Dec 05 09:45:58 compute-0 focused_cori[83898]:         "ceph_device_lvm": false,
Dec 05 09:45:58 compute-0 focused_cori[83898]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 09:45:58 compute-0 focused_cori[83898]:         "lsm_data": {},
Dec 05 09:45:58 compute-0 focused_cori[83898]:         "lvs": [],
Dec 05 09:45:58 compute-0 focused_cori[83898]:         "path": "/dev/sr0",
Dec 05 09:45:58 compute-0 focused_cori[83898]:         "rejected_reasons": [
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "Has a FileSystem",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "Insufficient space (<5GB)"
Dec 05 09:45:58 compute-0 focused_cori[83898]:         ],
Dec 05 09:45:58 compute-0 focused_cori[83898]:         "sys_api": {
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "actuators": null,
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "device_nodes": [
Dec 05 09:45:58 compute-0 focused_cori[83898]:                 "sr0"
Dec 05 09:45:58 compute-0 focused_cori[83898]:             ],
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "devname": "sr0",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "human_readable_size": "482.00 KB",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "id_bus": "ata",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "model": "QEMU DVD-ROM",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "nr_requests": "2",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "parent": "/dev/sr0",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "partitions": {},
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "path": "/dev/sr0",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "removable": "1",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "rev": "2.5+",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "ro": "0",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "rotational": "1",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "sas_address": "",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "sas_device_handle": "",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "scheduler_mode": "mq-deadline",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "sectors": 0,
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "sectorsize": "2048",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "size": 493568.0,
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "support_discard": "2048",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "type": "disk",
Dec 05 09:45:58 compute-0 focused_cori[83898]:             "vendor": "QEMU"
Dec 05 09:45:58 compute-0 focused_cori[83898]:         }
Dec 05 09:45:58 compute-0 focused_cori[83898]:     }
Dec 05 09:45:58 compute-0 focused_cori[83898]: ]
Dec 05 09:45:58 compute-0 systemd[1]: libpod-e5eefa9ee219e143974185e6ab4d81877802ce9f50308478e7a37ef8e9bba6f0.scope: Deactivated successfully.
Dec 05 09:45:58 compute-0 podman[83882]: 2025-12-05 09:45:58.119671432 +0000 UTC m=+1.327968056 container died e5eefa9ee219e143974185e6ab4d81877802ce9f50308478e7a37ef8e9bba6f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 09:45:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:45:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v61: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:58 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:58 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:58 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:58 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:59 compute-0 ceph-mon[74418]: pgmap v61: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:45:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-56dae0c6df83e66226e95c9f6cbdb1ed97512228356d92645470e08668f9cc7e-merged.mount: Deactivated successfully.
Dec 05 09:45:59 compute-0 podman[83882]: 2025-12-05 09:45:59.37197138 +0000 UTC m=+2.580267974 container remove e5eefa9ee219e143974185e6ab4d81877802ce9f50308478e7a37ef8e9bba6f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cori, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:45:59 compute-0 systemd[1]: libpod-conmon-e5eefa9ee219e143974185e6ab4d81877802ce9f50308478e7a37ef8e9bba6f0.scope: Deactivated successfully.
Dec 05 09:45:59 compute-0 sudo[83776]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:45:59 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:45:59 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:45:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:45:59 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:45:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:45:59 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:45:59 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:45:59 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:45:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:45:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:45:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:45:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:45:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 05 09:45:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 05 09:45:59 compute-0 ceph-mgr[74711]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Dec 05 09:45:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Dec 05 09:45:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 05 09:45:59 compute-0 ceph-mgr[74711]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 05 09:45:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 05 09:46:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 05 09:46:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v62: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:46:00 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:46:00 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:46:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:46:00 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:46:00 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:00 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:46:00 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:46:01 compute-0 ceph-mon[74418]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec 05 09:46:01 compute-0 ceph-mon[74418]: Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 05 09:46:01 compute-0 ceph-mon[74418]: pgmap v62: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:46:01 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:01 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:01 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:46:01 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:46:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:46:01 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:46:01 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:01 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:46:01 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:46:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v63: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:46:02 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:02 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:02 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:46:02 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:46:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:46:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:46:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:02 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:46:02 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:46:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:46:03 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:46:03 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:46:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:46:03 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:46:03 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:03 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:46:03 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:46:03 compute-0 ceph-mon[74418]: pgmap v63: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:46:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v64: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:46:04 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:46:04 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/1285266768; not ready for session (expect reconnect)
Dec 05 09:46:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:46:04 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:46:04 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:04 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:46:04 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 09:46:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 05 09:46:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 09:46:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Dec 05 09:46:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:04 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/1285266768,v1:192.168.122.101:6801/1285266768] boot
Dec 05 09:46:04 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Dec 05 09:46:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:46:04 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:46:04 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:04 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:46:05 compute-0 ceph-osd[82677]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 8.398 iops: 2149.830 elapsed_sec: 1.395
Dec 05 09:46:05 compute-0 ceph-osd[82677]: log_channel(cluster) log [WRN] : OSD bench result of 2149.830111 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 05 09:46:05 compute-0 ceph-osd[82677]: osd.1 0 waiting for initial osdmap
Dec 05 09:46:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1[82673]: 2025-12-05T09:46:05.223+0000 7f63878fc640 -1 osd.1 0 waiting for initial osdmap
Dec 05 09:46:05 compute-0 systemd[75697]: Starting Mark boot as successful...
Dec 05 09:46:05 compute-0 systemd[75697]: Finished Mark boot as successful.
Dec 05 09:46:05 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:46:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:46:05 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:05 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:46:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 05 09:46:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 09:46:06 compute-0 ceph-mgr[74711]: [devicehealth INFO root] creating mgr pool
Dec 05 09:46:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v66: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 05 09:46:06 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:46:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec 05 09:46:06 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 05 09:46:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:46:06 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:06 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:46:06 compute-0 ceph-osd[82677]: osd.1 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 05 09:46:06 compute-0 ceph-osd[82677]: osd.1 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 05 09:46:06 compute-0 ceph-osd[82677]: osd.1 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 05 09:46:06 compute-0 ceph-osd[82677]: osd.1 8 check_osdmap_features require_osd_release unknown -> squid
Dec 05 09:46:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Dec 05 09:46:06 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Dec 05 09:46:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:46:06 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:06 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:46:06 compute-0 ceph-mon[74418]: OSD bench result of 2088.012600 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 05 09:46:06 compute-0 ceph-mon[74418]: pgmap v64: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 09:46:06 compute-0 ceph-mon[74418]: osd.0 [v2:192.168.122.101:6800/1285266768,v1:192.168.122.101:6801/1285266768] boot
Dec 05 09:46:06 compute-0 ceph-mon[74418]: osdmap e8: 2 total, 1 up, 2 in
Dec 05 09:46:06 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:46:06 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:06 compute-0 ceph-mon[74418]: OSD bench result of 2149.830111 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 05 09:46:06 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:06 compute-0 ceph-osd[82677]: osd.1 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 05 09:46:06 compute-0 ceph-osd[82677]: osd.1 8 set_numa_affinity not setting numa affinity
Dec 05 09:46:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-osd-1[82673]: 2025-12-05T09:46:06.990+0000 7f6382f24640 -1 osd.1 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 05 09:46:06 compute-0 ceph-osd[82677]: osd.1 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 05 09:46:06 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec 05 09:46:07 compute-0 ceph-osd[82677]: osd.1 8 tick checking mon for new map
Dec 05 09:46:07 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3369399854; not ready for session (expect reconnect)
Dec 05 09:46:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:46:07 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:07 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 09:46:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 05 09:46:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 09:46:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 05 09:46:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Dec 05 09:46:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Dec 05 09:46:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 05 09:46:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 05 09:46:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 05 09:46:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/3369399854,v1:192.168.122.100:6803/3369399854] boot
Dec 05 09:46:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Dec 05 09:46:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:46:08 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec 05 09:46:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 05 09:46:08 compute-0 ceph-osd[82677]: osd.1 10 state: booting -> active
Dec 05 09:46:08 compute-0 ceph-osd[82677]: osd.1 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 05 09:46:08 compute-0 ceph-osd[82677]: osd.1 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 05 09:46:08 compute-0 ceph-osd[82677]: osd.1 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 05 09:46:08 compute-0 ceph-mon[74418]: pgmap v66: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 05 09:46:08 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 05 09:46:08 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:08 compute-0 ceph-mon[74418]: osdmap e9: 2 total, 1 up, 2 in
Dec 05 09:46:08 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:08 compute-0 ceph-mon[74418]: Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec 05 09:46:08 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:46:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v69: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 05 09:46:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 05 09:46:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 05 09:46:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Dec 05 09:46:09 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Dec 05 09:46:09 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 09:46:09 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 05 09:46:09 compute-0 ceph-mon[74418]: osd.1 [v2:192.168.122.100:6802/3369399854,v1:192.168.122.100:6803/3369399854] boot
Dec 05 09:46:09 compute-0 ceph-mon[74418]: osdmap e10: 2 total, 2 up, 2 in
Dec 05 09:46:09 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:46:09 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 05 09:46:09 compute-0 ceph-mon[74418]: pgmap v69: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 05 09:46:09 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 05 09:46:09 compute-0 ceph-mon[74418]: osdmap e11: 2 total, 2 up, 2 in
Dec 05 09:46:09 compute-0 ceph-mgr[74711]: [devicehealth INFO root] creating main.db for devicehealth
Dec 05 09:46:09 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Check health
Dec 05 09:46:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 05 09:46:09 compute-0 sudo[85089]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 05 09:46:09 compute-0 sudo[85089]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 09:46:09 compute-0 sudo[85089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 05 09:46:09 compute-0 sudo[85089]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 05 09:46:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 05 09:46:09 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:46:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 05 09:46:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Dec 05 09:46:10 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Dec 05 09:46:10 compute-0 ceph-mon[74418]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 09:46:10 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 05 09:46:10 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 05 09:46:10 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:46:10 compute-0 ceph-mon[74418]: osdmap e12: 2 total, 2 up, 2 in
Dec 05 09:46:10 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.hvnxai(active, since 2m)
Dec 05 09:46:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v72: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 05 09:46:11 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 05 09:46:11 compute-0 ceph-mon[74418]: mgrmap e9: compute-0.hvnxai(active, since 2m)
Dec 05 09:46:11 compute-0 ceph-mon[74418]: pgmap v72: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 05 09:46:12 compute-0 ceph-mon[74418]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 05 09:46:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:13 compute-0 ceph-mon[74418]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:46:13 compute-0 sudo[85115]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvbkjleqlllmhubdjxdwfiqzijsmjtxf ; /usr/bin/python3'
Dec 05 09:46:13 compute-0 sudo[85115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:13 compute-0 python3[85117]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:13 compute-0 podman[85119]: 2025-12-05 09:46:13.601456695 +0000 UTC m=+0.072925291 container create 829c250847add3d97e9260774755a9cccbeff6c281deefd4df7f85dcf85c0cd7 (image=quay.io/ceph/ceph:v19, name=musing_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:46:13 compute-0 systemd[1]: Started libpod-conmon-829c250847add3d97e9260774755a9cccbeff6c281deefd4df7f85dcf85c0cd7.scope.
Dec 05 09:46:13 compute-0 podman[85119]: 2025-12-05 09:46:13.576843261 +0000 UTC m=+0.048311877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ec44790748d74663c6b88147151e7c40329a62c7c7850f1527a18c33c08bc6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ec44790748d74663c6b88147151e7c40329a62c7c7850f1527a18c33c08bc6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ec44790748d74663c6b88147151e7c40329a62c7c7850f1527a18c33c08bc6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:13 compute-0 podman[85119]: 2025-12-05 09:46:13.699258998 +0000 UTC m=+0.170727624 container init 829c250847add3d97e9260774755a9cccbeff6c281deefd4df7f85dcf85c0cd7 (image=quay.io/ceph/ceph:v19, name=musing_brown, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 09:46:13 compute-0 podman[85119]: 2025-12-05 09:46:13.708225628 +0000 UTC m=+0.179694234 container start 829c250847add3d97e9260774755a9cccbeff6c281deefd4df7f85dcf85c0cd7 (image=quay.io/ceph/ceph:v19, name=musing_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 05 09:46:13 compute-0 podman[85119]: 2025-12-05 09:46:13.71403838 +0000 UTC m=+0.185506996 container attach 829c250847add3d97e9260774755a9cccbeff6c281deefd4df7f85dcf85c0cd7 (image=quay.io/ceph/ceph:v19, name=musing_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 09:46:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 05 09:46:14 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564011263' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 09:46:14 compute-0 musing_brown[85135]: 
Dec 05 09:46:14 compute-0 musing_brown[85135]: {"fsid":"3c63ce0f-5206-59ae-8381-b67d0b6424b5","health":{"status":"HEALTH_WARN","checks":{"BLUESTORE_SLOW_OP_ALERT":{"severity":"HEALTH_WARN","summary":{"message":"1 OSD(s) experiencing slow operations in BlueStore","count":1},"muted":false},"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":168,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":2,"osd_up_since":1764927967,"num_in_osds":2,"osd_in_since":1764927919,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":475168768,"bytes_avail":42466115584,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-12-05T09:43:21:401410+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-05T09:44:57.902053+0000","services":{}},"progress_events":{}}
Dec 05 09:46:14 compute-0 systemd[1]: libpod-829c250847add3d97e9260774755a9cccbeff6c281deefd4df7f85dcf85c0cd7.scope: Deactivated successfully.
Dec 05 09:46:14 compute-0 podman[85119]: 2025-12-05 09:46:14.885733764 +0000 UTC m=+1.357202400 container died 829c250847add3d97e9260774755a9cccbeff6c281deefd4df7f85dcf85c0cd7 (image=quay.io/ceph/ceph:v19, name=musing_brown, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 09:46:14 compute-0 ceph-mon[74418]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2564011263' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 09:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-31ec44790748d74663c6b88147151e7c40329a62c7c7850f1527a18c33c08bc6-merged.mount: Deactivated successfully.
Dec 05 09:46:15 compute-0 podman[85119]: 2025-12-05 09:46:15.016322959 +0000 UTC m=+1.487791545 container remove 829c250847add3d97e9260774755a9cccbeff6c281deefd4df7f85dcf85c0cd7 (image=quay.io/ceph/ceph:v19, name=musing_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 09:46:15 compute-0 systemd[1]: libpod-conmon-829c250847add3d97e9260774755a9cccbeff6c281deefd4df7f85dcf85c0cd7.scope: Deactivated successfully.
Dec 05 09:46:15 compute-0 sudo[85115]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:15 compute-0 sudo[85194]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfagkqmihkkvlwngrfoiqrrqndweyvvv ; /usr/bin/python3'
Dec 05 09:46:15 compute-0 sudo[85194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:15 compute-0 python3[85196]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:15 compute-0 podman[85197]: 2025-12-05 09:46:15.564893398 +0000 UTC m=+0.025419978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:15 compute-0 podman[85197]: 2025-12-05 09:46:15.987367399 +0000 UTC m=+0.447893949 container create 2f8cbd134ca226e0d72336d6381b3ee7df3093698b3f0ce5da7486792cc165f4 (image=quay.io/ceph/ceph:v19, name=ecstatic_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Dec 05 09:46:16 compute-0 systemd[1]: Started libpod-conmon-2f8cbd134ca226e0d72336d6381b3ee7df3093698b3f0ce5da7486792cc165f4.scope.
Dec 05 09:46:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a361efc1502c80f2c20013aab55e4b63bbae4d2e7c524281d67b54df7b2342/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a361efc1502c80f2c20013aab55e4b63bbae4d2e7c524281d67b54df7b2342/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:16 compute-0 podman[85197]: 2025-12-05 09:46:16.059053634 +0000 UTC m=+0.519580205 container init 2f8cbd134ca226e0d72336d6381b3ee7df3093698b3f0ce5da7486792cc165f4 (image=quay.io/ceph/ceph:v19, name=ecstatic_wiles, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:46:16 compute-0 podman[85197]: 2025-12-05 09:46:16.066387598 +0000 UTC m=+0.526914148 container start 2f8cbd134ca226e0d72336d6381b3ee7df3093698b3f0ce5da7486792cc165f4 (image=quay.io/ceph/ceph:v19, name=ecstatic_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 09:46:16 compute-0 podman[85197]: 2025-12-05 09:46:16.069787744 +0000 UTC m=+0.530314294 container attach 2f8cbd134ca226e0d72336d6381b3ee7df3093698b3f0ce5da7486792cc165f4 (image=quay.io/ceph/ceph:v19, name=ecstatic_wiles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 05 09:46:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 05 09:46:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/105860258' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 09:46:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 05 09:46:17 compute-0 ceph-mon[74418]: pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:17 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/105860258' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 09:46:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/105860258' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 09:46:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Dec 05 09:46:17 compute-0 ecstatic_wiles[85213]: pool 'vms' created
Dec 05 09:46:17 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Dec 05 09:46:17 compute-0 systemd[1]: libpod-2f8cbd134ca226e0d72336d6381b3ee7df3093698b3f0ce5da7486792cc165f4.scope: Deactivated successfully.
Dec 05 09:46:17 compute-0 podman[85197]: 2025-12-05 09:46:17.418436693 +0000 UTC m=+1.878963243 container died 2f8cbd134ca226e0d72336d6381b3ee7df3093698b3f0ce5da7486792cc165f4 (image=quay.io/ceph/ceph:v19, name=ecstatic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 09:46:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6a361efc1502c80f2c20013aab55e4b63bbae4d2e7c524281d67b54df7b2342-merged.mount: Deactivated successfully.
Dec 05 09:46:17 compute-0 podman[85197]: 2025-12-05 09:46:17.463318502 +0000 UTC m=+1.923845052 container remove 2f8cbd134ca226e0d72336d6381b3ee7df3093698b3f0ce5da7486792cc165f4 (image=quay.io/ceph/ceph:v19, name=ecstatic_wiles, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 09:46:17 compute-0 sudo[85194]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:46:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:46:17 compute-0 systemd[1]: libpod-conmon-2f8cbd134ca226e0d72336d6381b3ee7df3093698b3f0ce5da7486792cc165f4.scope: Deactivated successfully.
Dec 05 09:46:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:46:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:46:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 05 09:46:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 09:46:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:46:17 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:46:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:46:17 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:46:17 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:46:17 compute-0 sudo[85275]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlnsuisvxernygdfzssjdomdhajgqkti ; /usr/bin/python3'
Dec 05 09:46:17 compute-0 sudo[85275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:17 compute-0 python3[85277]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:17 compute-0 podman[85278]: 2025-12-05 09:46:17.900320226 +0000 UTC m=+0.066928664 container create d26dc6c6328c38c2b9e6bf5c4ac069cce8a6911fe6e11575f2be4e7cd4cee1fc (image=quay.io/ceph/ceph:v19, name=competent_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:46:17 compute-0 systemd[1]: Started libpod-conmon-d26dc6c6328c38c2b9e6bf5c4ac069cce8a6911fe6e11575f2be4e7cd4cee1fc.scope.
Dec 05 09:46:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89f7a0ad7f2c95905f7de37229871a30096681cafeeae9b80be194ded471cac8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89f7a0ad7f2c95905f7de37229871a30096681cafeeae9b80be194ded471cac8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:17 compute-0 podman[85278]: 2025-12-05 09:46:17.860745735 +0000 UTC m=+0.027354203 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:17 compute-0 podman[85278]: 2025-12-05 09:46:17.967576778 +0000 UTC m=+0.134185236 container init d26dc6c6328c38c2b9e6bf5c4ac069cce8a6911fe6e11575f2be4e7cd4cee1fc (image=quay.io/ceph/ceph:v19, name=competent_sutherland, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:46:17 compute-0 podman[85278]: 2025-12-05 09:46:17.973600796 +0000 UTC m=+0.140209244 container start d26dc6c6328c38c2b9e6bf5c4ac069cce8a6911fe6e11575f2be4e7cd4cee1fc (image=quay.io/ceph/ceph:v19, name=competent_sutherland, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Dec 05 09:46:17 compute-0 podman[85278]: 2025-12-05 09:46:17.976744463 +0000 UTC m=+0.143352911 container attach d26dc6c6328c38c2b9e6bf5c4ac069cce8a6911fe6e11575f2be4e7cd4cee1fc (image=quay.io/ceph/ceph:v19, name=competent_sutherland, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:46:18 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:46:18 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:46:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:46:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 05 09:46:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1421271938' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 09:46:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v77: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/105860258' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 09:46:18 compute-0 ceph-mon[74418]: osdmap e13: 2 total, 2 up, 2 in
Dec 05 09:46:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 09:46:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:18 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:46:18 compute-0 ceph-mon[74418]: Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:46:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1421271938' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 09:46:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 09:46:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 05 09:46:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1421271938' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 09:46:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Dec 05 09:46:18 compute-0 competent_sutherland[85293]: pool 'volumes' created
Dec 05 09:46:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Dec 05 09:46:18 compute-0 systemd[1]: libpod-d26dc6c6328c38c2b9e6bf5c4ac069cce8a6911fe6e11575f2be4e7cd4cee1fc.scope: Deactivated successfully.
Dec 05 09:46:18 compute-0 podman[85278]: 2025-12-05 09:46:18.542484221 +0000 UTC m=+0.709092669 container died d26dc6c6328c38c2b9e6bf5c4ac069cce8a6911fe6e11575f2be4e7cd4cee1fc (image=quay.io/ceph/ceph:v19, name=competent_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:46:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:46:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-89f7a0ad7f2c95905f7de37229871a30096681cafeeae9b80be194ded471cac8-merged.mount: Deactivated successfully.
Dec 05 09:46:18 compute-0 podman[85278]: 2025-12-05 09:46:18.615493823 +0000 UTC m=+0.782102271 container remove d26dc6c6328c38c2b9e6bf5c4ac069cce8a6911fe6e11575f2be4e7cd4cee1fc (image=quay.io/ceph/ceph:v19, name=competent_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Dec 05 09:46:18 compute-0 systemd[1]: libpod-conmon-d26dc6c6328c38c2b9e6bf5c4ac069cce8a6911fe6e11575f2be4e7cd4cee1fc.scope: Deactivated successfully.
Dec 05 09:46:18 compute-0 sudo[85275]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:18 compute-0 sudo[85357]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsqmqpkwqvhngkcdokjxftyvzldfdccl ; /usr/bin/python3'
Dec 05 09:46:18 compute-0 sudo[85357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:18 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:46:18 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:46:18 compute-0 python3[85359]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:18 compute-0 podman[85360]: 2025-12-05 09:46:18.982678634 +0000 UTC m=+0.038539453 container create 43c0f5efebdb0d6ae409f6a829b2bceb5df6df0f0eb3742d26610a10c929e34c (image=quay.io/ceph/ceph:v19, name=hardcore_booth, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 05 09:46:19 compute-0 systemd[1]: Started libpod-conmon-43c0f5efebdb0d6ae409f6a829b2bceb5df6df0f0eb3742d26610a10c929e34c.scope.
Dec 05 09:46:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ace7a8977afbdffffd0f14142b4c42680ba7ad231e331bbbf4f33217856aa1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ace7a8977afbdffffd0f14142b4c42680ba7ad231e331bbbf4f33217856aa1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:19 compute-0 podman[85360]: 2025-12-05 09:46:19.034692412 +0000 UTC m=+0.090553241 container init 43c0f5efebdb0d6ae409f6a829b2bceb5df6df0f0eb3742d26610a10c929e34c (image=quay.io/ceph/ceph:v19, name=hardcore_booth, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:46:19 compute-0 podman[85360]: 2025-12-05 09:46:19.040145044 +0000 UTC m=+0.096005863 container start 43c0f5efebdb0d6ae409f6a829b2bceb5df6df0f0eb3742d26610a10c929e34c (image=quay.io/ceph/ceph:v19, name=hardcore_booth, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:46:19 compute-0 podman[85360]: 2025-12-05 09:46:19.043856867 +0000 UTC m=+0.099717706 container attach 43c0f5efebdb0d6ae409f6a829b2bceb5df6df0f0eb3742d26610a10c929e34c (image=quay.io/ceph/ceph:v19, name=hardcore_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 05 09:46:19 compute-0 podman[85360]: 2025-12-05 09:46:18.963636224 +0000 UTC m=+0.019497063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:19 compute-0 ceph-mon[74418]: Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:46:19 compute-0 ceph-mon[74418]: pgmap v77: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:19 compute-0 ceph-mon[74418]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 09:46:19 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1421271938' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 09:46:19 compute-0 ceph-mon[74418]: osdmap e14: 2 total, 2 up, 2 in
Dec 05 09:46:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 05 09:46:19 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3136489848' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 09:46:19 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:46:19 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:46:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 05 09:46:19 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3136489848' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 09:46:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Dec 05 09:46:19 compute-0 hardcore_booth[85375]: pool 'backups' created
Dec 05 09:46:19 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Dec 05 09:46:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 15 pg[4.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:46:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:46:19 compute-0 systemd[1]: libpod-43c0f5efebdb0d6ae409f6a829b2bceb5df6df0f0eb3742d26610a10c929e34c.scope: Deactivated successfully.
Dec 05 09:46:19 compute-0 podman[85360]: 2025-12-05 09:46:19.549035579 +0000 UTC m=+0.604896418 container died 43c0f5efebdb0d6ae409f6a829b2bceb5df6df0f0eb3742d26610a10c929e34c (image=quay.io/ceph/ceph:v19, name=hardcore_booth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 09:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1ace7a8977afbdffffd0f14142b4c42680ba7ad231e331bbbf4f33217856aa1-merged.mount: Deactivated successfully.
Dec 05 09:46:19 compute-0 podman[85360]: 2025-12-05 09:46:19.592163129 +0000 UTC m=+0.648023948 container remove 43c0f5efebdb0d6ae409f6a829b2bceb5df6df0f0eb3742d26610a10c929e34c (image=quay.io/ceph/ceph:v19, name=hardcore_booth, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 05 09:46:19 compute-0 sudo[85357]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:19 compute-0 systemd[1]: libpod-conmon-43c0f5efebdb0d6ae409f6a829b2bceb5df6df0f0eb3742d26610a10c929e34c.scope: Deactivated successfully.
Dec 05 09:46:19 compute-0 sudo[85437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlestloeweocwriiykrvrmzmttdghlxo ; /usr/bin/python3'
Dec 05 09:46:19 compute-0 sudo[85437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:19 compute-0 python3[85439]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:19 compute-0 podman[85440]: 2025-12-05 09:46:19.939298122 +0000 UTC m=+0.049266243 container create 76fdf0c318a284903b64746d1d77514d99a6ec398fcc0977cb288c68f28c468d (image=quay.io/ceph/ceph:v19, name=sweet_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:46:19 compute-0 systemd[1]: Started libpod-conmon-76fdf0c318a284903b64746d1d77514d99a6ec398fcc0977cb288c68f28c468d.scope.
Dec 05 09:46:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8570b3a9503067c0e7473a4d82e71715993088e08f3881b5978f6865368310/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8570b3a9503067c0e7473a4d82e71715993088e08f3881b5978f6865368310/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:20 compute-0 podman[85440]: 2025-12-05 09:46:20.010024551 +0000 UTC m=+0.119992702 container init 76fdf0c318a284903b64746d1d77514d99a6ec398fcc0977cb288c68f28c468d (image=quay.io/ceph/ceph:v19, name=sweet_joliot, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 09:46:20 compute-0 podman[85440]: 2025-12-05 09:46:19.91805022 +0000 UTC m=+0.028018361 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:20 compute-0 podman[85440]: 2025-12-05 09:46:20.016047858 +0000 UTC m=+0.126015969 container start 76fdf0c318a284903b64746d1d77514d99a6ec398fcc0977cb288c68f28c468d (image=quay.io/ceph/ceph:v19, name=sweet_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:46:20 compute-0 podman[85440]: 2025-12-05 09:46:20.021406648 +0000 UTC m=+0.131374789 container attach 76fdf0c318a284903b64746d1d77514d99a6ec398fcc0977cb288c68f28c468d (image=quay.io/ceph/ceph:v19, name=sweet_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 09:46:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:46:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:46:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:46:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v80: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:20 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 5c9c88e7-82d4-4bd9-a365-b840802d4751 (Updating mon deployment (+2 -> 3))
Dec 05 09:46:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:46:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 09:46:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:46:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 09:46:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:46:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:20 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Dec 05 09:46:20 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Dec 05 09:46:20 compute-0 ceph-mon[74418]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:46:20 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3136489848' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 09:46:20 compute-0 ceph-mon[74418]: Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:46:20 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3136489848' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 09:46:20 compute-0 ceph-mon[74418]: osdmap e15: 2 total, 2 up, 2 in
Dec 05 09:46:20 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:20 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:20 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:20 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 09:46:20 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 09:46:20 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 05 09:46:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2100202848' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 09:46:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 05 09:46:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2100202848' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 09:46:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Dec 05 09:46:20 compute-0 sweet_joliot[85455]: pool 'images' created
Dec 05 09:46:20 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Dec 05 09:46:20 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 16 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:46:20 compute-0 systemd[1]: libpod-76fdf0c318a284903b64746d1d77514d99a6ec398fcc0977cb288c68f28c468d.scope: Deactivated successfully.
Dec 05 09:46:20 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 16 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:46:20 compute-0 podman[85440]: 2025-12-05 09:46:20.554997011 +0000 UTC m=+0.664965152 container died 76fdf0c318a284903b64746d1d77514d99a6ec398fcc0977cb288c68f28c468d (image=quay.io/ceph/ceph:v19, name=sweet_joliot, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:46:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb8570b3a9503067c0e7473a4d82e71715993088e08f3881b5978f6865368310-merged.mount: Deactivated successfully.
Dec 05 09:46:20 compute-0 podman[85440]: 2025-12-05 09:46:20.60343717 +0000 UTC m=+0.713405321 container remove 76fdf0c318a284903b64746d1d77514d99a6ec398fcc0977cb288c68f28c468d (image=quay.io/ceph/ceph:v19, name=sweet_joliot, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 09:46:20 compute-0 systemd[1]: libpod-conmon-76fdf0c318a284903b64746d1d77514d99a6ec398fcc0977cb288c68f28c468d.scope: Deactivated successfully.
Dec 05 09:46:20 compute-0 sudo[85437]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:20 compute-0 sudo[85517]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhtwlcbqbqwpedcgwphuxfvdgldtfaeg ; /usr/bin/python3'
Dec 05 09:46:20 compute-0 sudo[85517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:20 compute-0 python3[85519]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:21 compute-0 podman[85520]: 2025-12-05 09:46:20.976503104 +0000 UTC m=+0.033013220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:21 compute-0 ceph-mgr[74711]: [progress WARNING root] Starting Global Recovery Event,3 pgs not in active + clean state
Dec 05 09:46:21 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 05 09:46:21 compute-0 podman[85520]: 2025-12-05 09:46:21.223038356 +0000 UTC m=+0.279548502 container create f6a78ad1049a429dfff9b186643f5aab981060504f696eb6f2eae6ee3d4517b5 (image=quay.io/ceph/ceph:v19, name=naughty_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 09:46:21 compute-0 systemd[1]: Started libpod-conmon-f6a78ad1049a429dfff9b186643f5aab981060504f696eb6f2eae6ee3d4517b5.scope.
Dec 05 09:46:21 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7a2312ffb0f700213c1da0ff8492480c714eff9e3db892d3af07e2356498b25/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7a2312ffb0f700213c1da0ff8492480c714eff9e3db892d3af07e2356498b25/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:21 compute-0 podman[85520]: 2025-12-05 09:46:21.353029503 +0000 UTC m=+0.409539619 container init f6a78ad1049a429dfff9b186643f5aab981060504f696eb6f2eae6ee3d4517b5 (image=quay.io/ceph/ceph:v19, name=naughty_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:46:21 compute-0 podman[85520]: 2025-12-05 09:46:21.35898036 +0000 UTC m=+0.415490456 container start f6a78ad1049a429dfff9b186643f5aab981060504f696eb6f2eae6ee3d4517b5 (image=quay.io/ceph/ceph:v19, name=naughty_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 05 09:46:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 05 09:46:21 compute-0 podman[85520]: 2025-12-05 09:46:21.589770224 +0000 UTC m=+0.646280320 container attach f6a78ad1049a429dfff9b186643f5aab981060504f696eb6f2eae6ee3d4517b5 (image=quay.io/ceph/ceph:v19, name=naughty_elbakyan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 09:46:21 compute-0 ceph-mon[74418]: pgmap v80: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:21 compute-0 ceph-mon[74418]: Deploying daemon mon.compute-2 on compute-2
Dec 05 09:46:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2100202848' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 09:46:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2100202848' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 09:46:21 compute-0 ceph-mon[74418]: osdmap e16: 2 total, 2 up, 2 in
Dec 05 09:46:21 compute-0 ceph-mon[74418]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 05 09:46:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Dec 05 09:46:21 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Dec 05 09:46:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 17 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:46:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 05 09:46:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2726857675' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 09:46:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v83: 5 pgs: 2 active+clean, 3 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 05 09:46:22 compute-0 ceph-mon[74418]: osdmap e17: 2 total, 2 up, 2 in
Dec 05 09:46:22 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2726857675' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 09:46:22 compute-0 ceph-mon[74418]: pgmap v83: 5 pgs: 2 active+clean, 3 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:22 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2726857675' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 09:46:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Dec 05 09:46:22 compute-0 naughty_elbakyan[85535]: pool 'cephfs.cephfs.meta' created
Dec 05 09:46:22 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Dec 05 09:46:22 compute-0 systemd[1]: libpod-f6a78ad1049a429dfff9b186643f5aab981060504f696eb6f2eae6ee3d4517b5.scope: Deactivated successfully.
Dec 05 09:46:22 compute-0 podman[85520]: 2025-12-05 09:46:22.792615595 +0000 UTC m=+1.849125721 container died f6a78ad1049a429dfff9b186643f5aab981060504f696eb6f2eae6ee3d4517b5 (image=quay.io/ceph/ceph:v19, name=naughty_elbakyan, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7a2312ffb0f700213c1da0ff8492480c714eff9e3db892d3af07e2356498b25-merged.mount: Deactivated successfully.
Dec 05 09:46:22 compute-0 podman[85520]: 2025-12-05 09:46:22.95546606 +0000 UTC m=+2.011976156 container remove f6a78ad1049a429dfff9b186643f5aab981060504f696eb6f2eae6ee3d4517b5 (image=quay.io/ceph/ceph:v19, name=naughty_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 09:46:22 compute-0 systemd[1]: libpod-conmon-f6a78ad1049a429dfff9b186643f5aab981060504f696eb6f2eae6ee3d4517b5.scope: Deactivated successfully.
Dec 05 09:46:22 compute-0 sudo[85517]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:46:23 compute-0 sudo[85598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raifawiftnhnzbcdytuzbjoendbvhiwu ; /usr/bin/python3'
Dec 05 09:46:23 compute-0 sudo[85598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:23 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:46:23 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 05 09:46:23 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:46:23 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:46:23 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:46:23 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:23 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Dec 05 09:46:23 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Dec 05 09:46:23 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4193930644; not ready for session (expect reconnect)
Dec 05 09:46:23 compute-0 python3[85600]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 05 09:46:23 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:46:23 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Dec 05 09:46:23 compute-0 podman[85601]: 2025-12-05 09:46:23.436753275 +0000 UTC m=+0.100599921 container create c7c55801437d6f19385ae320691318baef74a4d15ec4fe4a526dd438d7a48902 (image=quay.io/ceph/ceph:v19, name=boring_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 05 09:46:23 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 05 09:46:23 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:23 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 05 09:46:23 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 05 09:46:23 compute-0 ceph-mon[74418]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:46:23 compute-0 podman[85601]: 2025-12-05 09:46:23.36182017 +0000 UTC m=+0.025666856 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:23 compute-0 systemd[1]: Started libpod-conmon-c7c55801437d6f19385ae320691318baef74a4d15ec4fe4a526dd438d7a48902.scope.
Dec 05 09:46:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74225ca5ce033b2db0df442ebd7b155efad4d7e9f48a2b62e091599e21c62b2b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74225ca5ce033b2db0df442ebd7b155efad4d7e9f48a2b62e091599e21c62b2b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:23 compute-0 podman[85601]: 2025-12-05 09:46:23.494061101 +0000 UTC m=+0.157907757 container init c7c55801437d6f19385ae320691318baef74a4d15ec4fe4a526dd438d7a48902 (image=quay.io/ceph/ceph:v19, name=boring_jennings, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 05 09:46:23 compute-0 podman[85601]: 2025-12-05 09:46:23.499879133 +0000 UTC m=+0.163725769 container start c7c55801437d6f19385ae320691318baef74a4d15ec4fe4a526dd438d7a48902 (image=quay.io/ceph/ceph:v19, name=boring_jennings, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 05 09:46:23 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 05 09:46:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v85: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:24 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 05 09:46:24 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4193930644; not ready for session (expect reconnect)
Dec 05 09:46:24 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 05 09:46:24 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:24 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 05 09:46:24 compute-0 podman[85601]: 2025-12-05 09:46:24.398805295 +0000 UTC m=+1.062651961 container attach c7c55801437d6f19385ae320691318baef74a4d15ec4fe4a526dd438d7a48902 (image=quay.io/ceph/ceph:v19, name=boring_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 09:46:24 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 18 pg[6.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:46:25 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 05 09:46:25 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4193930644; not ready for session (expect reconnect)
Dec 05 09:46:25 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 05 09:46:25 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:25 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 05 09:46:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:46:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:46:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:46:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:46:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:46:26 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:46:26 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 538981dc-4e36-4013-8ba3-4eaefb728747 (Global Recovery Event) in 5 seconds
Dec 05 09:46:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v86: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:26 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4193930644; not ready for session (expect reconnect)
Dec 05 09:46:26 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 05 09:46:26 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:26 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 05 09:46:26 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 05 09:46:26 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 05 09:46:27 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:46:27 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 05 09:46:27 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 05 09:46:27 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 05 09:46:27 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3383223143; not ready for session (expect reconnect)
Dec 05 09:46:27 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:46:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:27 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 05 09:46:27 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 05 09:46:27 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4193930644; not ready for session (expect reconnect)
Dec 05 09:46:27 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 05 09:46:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:27 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 05 09:46:28 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3383223143; not ready for session (expect reconnect)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 05 09:46:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v87: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:28 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4193930644; not ready for session (expect reconnect)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 05 09:46:28 compute-0 ceph-mon[74418]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : monmap epoch 2
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : last_changed 2025-12-05T09:46:23.237959+0000
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : created 2025-12-05T09:43:16.088283+0000
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap 
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.hvnxai(active, since 2m)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 4 pool(s) do not have an application enabled
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :      osd.0 observed slow operation indications in BlueStore
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
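[editor's note] The warning above spells out its own remediation. A minimal sketch, assuming the four pools hold RBD data as their OpenStack names suggest; only 'vms' is confirmed as rbd later in this log:

    # tag each pool with the application that uses it
    ceph osd pool application enable vms rbd
    ceph osd pool application enable volumes rbd
    ceph osd pool application enable backups rbd
    ceph osd pool application enable images rbd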
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:28 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 5c9c88e7-82d4-4bd9-a365-b840802d4751 (Updating mon deployment (+2 -> 3))
Dec 05 09:46:28 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 5c9c88e7-82d4-4bd9-a365-b840802d4751 (Updating mon deployment (+2 -> 3)) in 8 seconds
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:28 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 153adee1-62b4-41db-8e7b-628afcd82845 (Updating mgr deployment (+2 -> 3))
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.wewrgp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.wewrgp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
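[editor's note] The mon_command JSON above maps one-to-one onto the ceph CLI; the caps below are taken verbatim from the audit entry. The equivalent shell invocation for creating that mgr key would be:

    ceph auth get-or-create mgr.compute-2.wewrgp \
        mon 'profile mgr' osd 'allow *' mds 'allow *'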
Dec 05 09:46:28 compute-0 ceph-mon[74418]: Deploying daemon mon.compute-1 on compute-1
Dec 05 09:46:28 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0 calling monitor election
Dec 05 09:46:28 compute-0 ceph-mon[74418]: pgmap v85: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:28 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-2 calling monitor election
Dec 05 09:46:28 compute-0 ceph-mon[74418]: pgmap v86: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:28 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mon[74418]: pgmap v87: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:28 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: monmap epoch 2
Dec 05 09:46:28 compute-0 ceph-mon[74418]: fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:46:28 compute-0 ceph-mon[74418]: last_changed 2025-12-05T09:46:23.237959+0000
Dec 05 09:46:28 compute-0 ceph-mon[74418]: created 2025-12-05T09:43:16.088283+0000
Dec 05 09:46:28 compute-0 ceph-mon[74418]: min_mon_release 19 (squid)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: election_strategy: 1
Dec 05 09:46:28 compute-0 ceph-mon[74418]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 05 09:46:28 compute-0 ceph-mon[74418]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 05 09:46:28 compute-0 ceph-mon[74418]: fsmap 
Dec 05 09:46:28 compute-0 ceph-mon[74418]: osdmap e18: 2 total, 2 up, 2 in
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mgrmap e9: compute-0.hvnxai(active, since 2m)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 4 pool(s) do not have an application enabled
Dec 05 09:46:28 compute-0 ceph-mon[74418]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Dec 05 09:46:28 compute-0 ceph-mon[74418]:      osd.0 observed slow operation indications in BlueStore
Dec 05 09:46:28 compute-0 ceph-mon[74418]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Dec 05 09:46:28 compute-0 ceph-mon[74418]:     application not enabled on pool 'vms'
Dec 05 09:46:28 compute-0 ceph-mon[74418]:     application not enabled on pool 'volumes'
Dec 05 09:46:28 compute-0 ceph-mon[74418]:     application not enabled on pool 'backups'
Dec 05 09:46:28 compute-0 ceph-mon[74418]:     application not enabled on pool 'images'
Dec 05 09:46:28 compute-0 ceph-mon[74418]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec 05 09:46:28 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:28 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:28 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.wewrgp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:46:28 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:28 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.wewrgp on compute-2
Dec 05 09:46:28 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.wewrgp on compute-2
Dec 05 09:46:28 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 19 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
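[editor's note] osd.1 reports PG 6.0 going Primary and then active; the surrounding pgmap entries track the same transitions (5 active+clean, 1 unknown, then 1 creating+peering, and finally 6 active+clean). To reproduce those summaries interactively:

    ceph pg stat             # one-line pgmap summary, like the pgmap v8x entries above
    ceph pg dump pgs_brief   # per-PG state and acting set, e.g. 6.0 active+clean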
Dec 05 09:46:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 05 09:46:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Dec 05 09:46:29 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3383223143; not ready for session (expect reconnect)
Dec 05 09:46:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:46:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:29 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 05 09:46:29 compute-0 ceph-mon[74418]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 05 09:46:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:46:29 compute-0 ceph-mon[74418]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:46:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:29 compute-0 ceph-mon[74418]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 05 09:46:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:29 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 05 09:46:29 compute-0 ceph-mon[74418]: paxos.0).electionLogic(10) init, last seen epoch 10
Dec 05 09:46:29 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 05 09:46:29 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:46:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:46:29.241+0000 7f9e57f14640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Dec 05 09:46:29 compute-0 ceph-mgr[74711]: mgr.server handle_report got status from non-daemon mon.compute-2
Dec 05 09:46:29 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:29 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:30 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3383223143; not ready for session (expect reconnect)
Dec 05 09:46:30 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:46:30 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:30 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 05 09:46:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v89: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:30 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:30 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:30 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:46:30 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:31 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:31 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:31 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 4 completed events
Dec 05 09:46:31 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:46:31 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3383223143; not ready for session (expect reconnect)
Dec 05 09:46:31 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:46:31 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:31 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 05 09:46:31 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:32 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3383223143; not ready for session (expect reconnect)
Dec 05 09:46:32 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:46:32 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:32 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 05 09:46:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v90: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:32 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:32 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:33 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3383223143; not ready for session (expect reconnect)
Dec 05 09:46:33 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:46:33 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:33 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 05 09:46:33 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:33 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:33 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 09:46:34 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3383223143; not ready for session (expect reconnect)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:34 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 05 09:46:34 compute-0 ceph-mon[74418]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Dec 05 09:46:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v91: 6 pgs: 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : monmap epoch 3
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : last_changed 2025-12-05T09:46:29.159401+0000
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : created 2025-12-05T09:43:16.088283+0000
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
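[editor's note] With compute-1 added, the monmap now lists three ranks. The same view is available on demand, without waiting for the next election report:

    ceph quorum_status -f json-pretty   # quorum ranks, leader, and full monmap
    ceph mon stat                       # one-line summary of mons in/out of quorum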
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap 
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.hvnxai(active, since 2m)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 5 pool(s) do not have an application enabled
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :      osd.0 observed slow operation indications in BlueStore
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.unhddt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.unhddt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 05 09:46:34 compute-0 ceph-mon[74418]: Deploying daemon mgr.compute-2.wewrgp on compute-2
Dec 05 09:46:34 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:46:34 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:34 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-0 calling monitor election
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-2 calling monitor election
Dec 05 09:46:34 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:34 compute-0 ceph-mon[74418]: pgmap v89: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:34 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-1 calling monitor election
Dec 05 09:46:34 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:34 compute-0 ceph-mon[74418]: pgmap v90: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:34 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:34 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: monmap epoch 3
Dec 05 09:46:34 compute-0 ceph-mon[74418]: fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:46:34 compute-0 ceph-mon[74418]: last_changed 2025-12-05T09:46:29.159401+0000
Dec 05 09:46:34 compute-0 ceph-mon[74418]: created 2025-12-05T09:43:16.088283+0000
Dec 05 09:46:34 compute-0 ceph-mon[74418]: min_mon_release 19 (squid)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: election_strategy: 1
Dec 05 09:46:34 compute-0 ceph-mon[74418]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 05 09:46:34 compute-0 ceph-mon[74418]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 05 09:46:34 compute-0 ceph-mon[74418]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 05 09:46:34 compute-0 ceph-mon[74418]: fsmap 
Dec 05 09:46:34 compute-0 ceph-mon[74418]: osdmap e19: 2 total, 2 up, 2 in
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mgrmap e9: compute-0.hvnxai(active, since 2m)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 5 pool(s) do not have an application enabled
Dec 05 09:46:34 compute-0 ceph-mon[74418]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Dec 05 09:46:34 compute-0 ceph-mon[74418]:      osd.0 observed slow operation indications in BlueStore
Dec 05 09:46:34 compute-0 ceph-mon[74418]: [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled
Dec 05 09:46:34 compute-0 ceph-mon[74418]:     application not enabled on pool 'vms'
Dec 05 09:46:34 compute-0 ceph-mon[74418]:     application not enabled on pool 'volumes'
Dec 05 09:46:34 compute-0 ceph-mon[74418]:     application not enabled on pool 'backups'
Dec 05 09:46:34 compute-0 ceph-mon[74418]:     application not enabled on pool 'images'
Dec 05 09:46:34 compute-0 ceph-mon[74418]:     application not enabled on pool 'cephfs.cephfs.meta'
Dec 05 09:46:34 compute-0 ceph-mon[74418]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec 05 09:46:34 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:34 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:34 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:34 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.unhddt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 09:46:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:46:34 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:34 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.unhddt on compute-1
Dec 05 09:46:34 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.unhddt on compute-1
Dec 05 09:46:35 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3383223143; not ready for session (expect reconnect)
Dec 05 09:46:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:46:35 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:35 compute-0 ceph-mon[74418]: pgmap v91: 6 pgs: 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:35 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.unhddt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 05 09:46:35 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.unhddt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 05 09:46:35 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 09:46:35 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:35 compute-0 ceph-mon[74418]: Deploying daemon mgr.compute-1.unhddt on compute-1
Dec 05 09:46:35 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:46:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 05 09:46:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3421849166' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 09:46:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:46:36.160+0000 7f9e57f14640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Dec 05 09:46:36 compute-0 ceph-mgr[74711]: mgr.server handle_report got status from non-daemon mon.compute-1
Dec 05 09:46:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:46:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v92: 6 pgs: 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:46:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 05 09:46:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:36 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 153adee1-62b4-41db-8e7b-628afcd82845 (Updating mgr deployment (+2 -> 3))
Dec 05 09:46:36 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 153adee1-62b4-41db-8e7b-628afcd82845 (Updating mgr deployment (+2 -> 3)) in 8 seconds
Dec 05 09:46:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 05 09:46:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:36 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 8ea55df2-183a-4e14-9e72-d3356c2922e3 (Updating crash deployment (+1 -> 3))
Dec 05 09:46:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:46:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 05 09:46:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 05 09:46:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:46:36 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:36 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Dec 05 09:46:36 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Dec 05 09:46:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 05 09:46:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3421849166' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 09:46:36 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:36 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:36 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:36 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:36 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 05 09:46:36 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 05 09:46:36 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3421849166' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 09:46:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Dec 05 09:46:36 compute-0 boring_jennings[85617]: pool 'cephfs.cephfs.data' created
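[editor's note] The pool was created by the client.admin command dispatched through the containerized CLI (the "osd pool create" audit entries above). Its bare-CLI equivalent, assuming the --autoscale-mode flag maps to the autoscale_mode field in the JSON:

    ceph osd pool create cephfs.cephfs.data --autoscale-mode on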
Dec 05 09:46:36 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Dec 05 09:46:36 compute-0 systemd[1]: libpod-c7c55801437d6f19385ae320691318baef74a4d15ec4fe4a526dd438d7a48902.scope: Deactivated successfully.
Dec 05 09:46:36 compute-0 podman[85601]: 2025-12-05 09:46:36.318760272 +0000 UTC m=+12.982606918 container died c7c55801437d6f19385ae320691318baef74a4d15ec4fe4a526dd438d7a48902 (image=quay.io/ceph/ceph:v19, name=boring_jennings, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:46:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-74225ca5ce033b2db0df442ebd7b155efad4d7e9f48a2b62e091599e21c62b2b-merged.mount: Deactivated successfully.
Dec 05 09:46:36 compute-0 podman[85601]: 2025-12-05 09:46:36.364409162 +0000 UTC m=+13.028255808 container remove c7c55801437d6f19385ae320691318baef74a4d15ec4fe4a526dd438d7a48902 (image=quay.io/ceph/ceph:v19, name=boring_jennings, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:46:36 compute-0 systemd[1]: libpod-conmon-c7c55801437d6f19385ae320691318baef74a4d15ec4fe4a526dd438d7a48902.scope: Deactivated successfully.
Dec 05 09:46:36 compute-0 sudo[85598]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:36 compute-0 sudo[85679]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjewhdvpnedwjmcpkoecpaxkltnkxgjz ; /usr/bin/python3'
Dec 05 09:46:36 compute-0 sudo[85679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:36 compute-0 python3[85681]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
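[editor's note] This is the pattern the playbook uses for one-off ceph commands: a throwaway quay.io/ceph/ceph:v19 container with the host's /etc/ceph bind-mounted and ceph as the entrypoint. Reformatted from the invocation above, and swapping in the next pool (assumption: the remaining pools from the health hint are enabled the same way):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool application enable volumes rbd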
Dec 05 09:46:36 compute-0 podman[85682]: 2025-12-05 09:46:36.767866273 +0000 UTC m=+0.028053292 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:36 compute-0 podman[85682]: 2025-12-05 09:46:36.961794071 +0000 UTC m=+0.221981080 container create f93cb2c7cf4f3ea3a1dce8108bb28c1fa2f26813e4a03eac174a21795229c11d (image=quay.io/ceph/ceph:v19, name=interesting_hawking, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 09:46:37 compute-0 systemd[1]: Started libpod-conmon-f93cb2c7cf4f3ea3a1dce8108bb28c1fa2f26813e4a03eac174a21795229c11d.scope.
Dec 05 09:46:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b495d44784129749b195d7fa6e600f0676e9f67c9434e5ac0ede36d20bca21/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b495d44784129749b195d7fa6e600f0676e9f67c9434e5ac0ede36d20bca21/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:37 compute-0 podman[85682]: 2025-12-05 09:46:37.028650462 +0000 UTC m=+0.288837471 container init f93cb2c7cf4f3ea3a1dce8108bb28c1fa2f26813e4a03eac174a21795229c11d (image=quay.io/ceph/ceph:v19, name=interesting_hawking, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:46:37 compute-0 podman[85682]: 2025-12-05 09:46:37.036653955 +0000 UTC m=+0.296840964 container start f93cb2c7cf4f3ea3a1dce8108bb28c1fa2f26813e4a03eac174a21795229c11d (image=quay.io/ceph/ceph:v19, name=interesting_hawking, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 05 09:46:37 compute-0 podman[85682]: 2025-12-05 09:46:37.040288576 +0000 UTC m=+0.300475675 container attach f93cb2c7cf4f3ea3a1dce8108bb28c1fa2f26813e4a03eac174a21795229c11d (image=quay.io/ceph/ceph:v19, name=interesting_hawking, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:46:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 05 09:46:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec 05 09:46:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2932360286' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 05 09:46:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:46:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v94: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:38 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 09:46:38 compute-0 ceph-mon[74418]: pgmap v92: 6 pgs: 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:38 compute-0 ceph-mon[74418]: Deploying daemon crash.compute-2 on compute-2
Dec 05 09:46:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3421849166' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 09:46:38 compute-0 ceph-mon[74418]: osdmap e20: 2 total, 2 up, 2 in
Dec 05 09:46:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Dec 05 09:46:38 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:38 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Dec 05 09:46:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:46:38 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 05 09:46:38 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:38 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 8ea55df2-183a-4e14-9e72-d3356c2922e3 (Updating crash deployment (+1 -> 3))
Dec 05 09:46:38 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 8ea55df2-183a-4e14-9e72-d3356c2922e3 (Updating crash deployment (+1 -> 3)) in 2 seconds
Dec 05 09:46:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 05 09:46:38 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:46:38 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:46:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:46:38 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:46:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:46:38 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:46:38 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:46:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:46:38 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:38 compute-0 sudo[85721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:46:38 compute-0 sudo[85721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:46:38 compute-0 sudo[85721]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:38 compute-0 sudo[85746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:46:38 compute-0 sudo[85746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:46:39 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 6 completed events
Dec 05 09:46:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:46:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:39 compute-0 podman[85810]: 2025-12-05 09:46:39.378190443 +0000 UTC m=+0.061388530 container create c893090a9917ddef3fdf747c80468a144d749a10c403cff1f73b9fe3e34f2d14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_bohr, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:46:39 compute-0 systemd[1]: Started libpod-conmon-c893090a9917ddef3fdf747c80468a144d749a10c403cff1f73b9fe3e34f2d14.scope.
Dec 05 09:46:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:39 compute-0 podman[85810]: 2025-12-05 09:46:39.445147957 +0000 UTC m=+0.128346074 container init c893090a9917ddef3fdf747c80468a144d749a10c403cff1f73b9fe3e34f2d14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 09:46:39 compute-0 podman[85810]: 2025-12-05 09:46:39.354325118 +0000 UTC m=+0.037523235 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:46:39 compute-0 podman[85810]: 2025-12-05 09:46:39.452536722 +0000 UTC m=+0.135734809 container start c893090a9917ddef3fdf747c80468a144d749a10c403cff1f73b9fe3e34f2d14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_bohr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 05 09:46:39 compute-0 podman[85810]: 2025-12-05 09:46:39.456008669 +0000 UTC m=+0.139206776 container attach c893090a9917ddef3fdf747c80468a144d749a10c403cff1f73b9fe3e34f2d14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:46:39 compute-0 competent_bohr[85827]: 167 167
Dec 05 09:46:39 compute-0 systemd[1]: libpod-c893090a9917ddef3fdf747c80468a144d749a10c403cff1f73b9fe3e34f2d14.scope: Deactivated successfully.
Dec 05 09:46:39 compute-0 podman[85810]: 2025-12-05 09:46:39.461367329 +0000 UTC m=+0.144565436 container died c893090a9917ddef3fdf747c80468a144d749a10c403cff1f73b9fe3e34f2d14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bdcb56677d906eb7ed2380fd3d6ebdc1ba4ad79c332902db3884c2883520c27-merged.mount: Deactivated successfully.
Dec 05 09:46:39 compute-0 podman[85810]: 2025-12-05 09:46:39.508276394 +0000 UTC m=+0.191474481 container remove c893090a9917ddef3fdf747c80468a144d749a10c403cff1f73b9fe3e34f2d14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_bohr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:46:39 compute-0 systemd[1]: libpod-conmon-c893090a9917ddef3fdf747c80468a144d749a10c403cff1f73b9fe3e34f2d14.scope: Deactivated successfully.
Dec 05 09:46:39 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2932360286' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 05 09:46:39 compute-0 ceph-mon[74418]: pgmap v94: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:39 compute-0 ceph-mon[74418]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 09:46:39 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:39 compute-0 ceph-mon[74418]: osdmap e21: 2 total, 2 up, 2 in
Dec 05 09:46:39 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:39 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:39 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:39 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:46:39 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:46:39 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:39 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:46:39 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:39 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 05 09:46:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2932360286' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 05 09:46:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Dec 05 09:46:39 compute-0 interesting_hawking[85697]: enabled application 'rbd' on pool 'vms'
Dec 05 09:46:39 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Dec 05 09:46:39 compute-0 systemd[1]: libpod-f93cb2c7cf4f3ea3a1dce8108bb28c1fa2f26813e4a03eac174a21795229c11d.scope: Deactivated successfully.
Dec 05 09:46:39 compute-0 podman[85848]: 2025-12-05 09:46:39.658631369 +0000 UTC m=+0.046604619 container died f93cb2c7cf4f3ea3a1dce8108bb28c1fa2f26813e4a03eac174a21795229c11d (image=quay.io/ceph/ceph:v19, name=interesting_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 09:46:39 compute-0 podman[85857]: 2025-12-05 09:46:39.680074326 +0000 UTC m=+0.046374322 container create 2c20816625ecd8a6a20150d393a79537b095671b4bb07bfff6f64e1404e62c5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5b495d44784129749b195d7fa6e600f0676e9f67c9434e5ac0ede36d20bca21-merged.mount: Deactivated successfully.
Dec 05 09:46:39 compute-0 podman[85848]: 2025-12-05 09:46:39.710182194 +0000 UTC m=+0.098155424 container remove f93cb2c7cf4f3ea3a1dce8108bb28c1fa2f26813e4a03eac174a21795229c11d (image=quay.io/ceph/ceph:v19, name=interesting_hawking, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 09:46:39 compute-0 systemd[1]: libpod-conmon-f93cb2c7cf4f3ea3a1dce8108bb28c1fa2f26813e4a03eac174a21795229c11d.scope: Deactivated successfully.
Dec 05 09:46:39 compute-0 systemd[1]: Started libpod-conmon-2c20816625ecd8a6a20150d393a79537b095671b4bb07bfff6f64e1404e62c5e.scope.
Dec 05 09:46:39 compute-0 sudo[85679]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:39 compute-0 podman[85857]: 2025-12-05 09:46:39.65758736 +0000 UTC m=+0.023887406 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:46:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0adca3fa9a179ab470ac737fb509e469c4c55a9190008a5ebc6db5f755edf6f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0adca3fa9a179ab470ac737fb509e469c4c55a9190008a5ebc6db5f755edf6f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0adca3fa9a179ab470ac737fb509e469c4c55a9190008a5ebc6db5f755edf6f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0adca3fa9a179ab470ac737fb509e469c4c55a9190008a5ebc6db5f755edf6f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0adca3fa9a179ab470ac737fb509e469c4c55a9190008a5ebc6db5f755edf6f7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:39 compute-0 podman[85857]: 2025-12-05 09:46:39.783408913 +0000 UTC m=+0.149708939 container init 2c20816625ecd8a6a20150d393a79537b095671b4bb07bfff6f64e1404e62c5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 09:46:39 compute-0 podman[85857]: 2025-12-05 09:46:39.792444404 +0000 UTC m=+0.158744400 container start 2c20816625ecd8a6a20150d393a79537b095671b4bb07bfff6f64e1404e62c5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 05 09:46:39 compute-0 podman[85857]: 2025-12-05 09:46:39.795269733 +0000 UTC m=+0.161569819 container attach 2c20816625ecd8a6a20150d393a79537b095671b4bb07bfff6f64e1404e62c5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:46:39 compute-0 sudo[85913]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvqadhvilfrlekopzlamolwqvpuiorns ; /usr/bin/python3'
Dec 05 09:46:39 compute-0 sudo[85913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:40 compute-0 python3[85915]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:40 compute-0 podman[85922]: 2025-12-05 09:46:40.096807986 +0000 UTC m=+0.042982888 container create 85c8b330f6fff43893963117c20b3bdbfad825cfab3ec70a0f411d833ddee0c4 (image=quay.io/ceph/ceph:v19, name=flamboyant_euclid, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:46:40 compute-0 goofy_gagarin[85885]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:46:40 compute-0 goofy_gagarin[85885]: --> All data devices are unavailable
Dec 05 09:46:40 compute-0 systemd[1]: Started libpod-conmon-85c8b330f6fff43893963117c20b3bdbfad825cfab3ec70a0f411d833ddee0c4.scope.
Dec 05 09:46:40 compute-0 podman[85857]: 2025-12-05 09:46:40.156043965 +0000 UTC m=+0.522343951 container died 2c20816625ecd8a6a20150d393a79537b095671b4bb07bfff6f64e1404e62c5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 05 09:46:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:40 compute-0 systemd[1]: libpod-2c20816625ecd8a6a20150d393a79537b095671b4bb07bfff6f64e1404e62c5e.scope: Deactivated successfully.
Dec 05 09:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fa36d6c219bad53475a8a3f47dbf68ff9678b457f6339c53c65482905a9ff48/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fa36d6c219bad53475a8a3f47dbf68ff9678b457f6339c53c65482905a9ff48/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:40 compute-0 podman[85922]: 2025-12-05 09:46:40.078901528 +0000 UTC m=+0.025076450 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v97: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:40 compute-0 podman[85922]: 2025-12-05 09:46:40.191152622 +0000 UTC m=+0.137327544 container init 85c8b330f6fff43893963117c20b3bdbfad825cfab3ec70a0f411d833ddee0c4 (image=quay.io/ceph/ceph:v19, name=flamboyant_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 05 09:46:40 compute-0 podman[85922]: 2025-12-05 09:46:40.202315613 +0000 UTC m=+0.148490515 container start 85c8b330f6fff43893963117c20b3bdbfad825cfab3ec70a0f411d833ddee0c4 (image=quay.io/ceph/ceph:v19, name=flamboyant_euclid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:46:40 compute-0 podman[85922]: 2025-12-05 09:46:40.20869384 +0000 UTC m=+0.154868772 container attach 85c8b330f6fff43893963117c20b3bdbfad825cfab3ec70a0f411d833ddee0c4 (image=quay.io/ceph/ceph:v19, name=flamboyant_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:46:40 compute-0 podman[85857]: 2025-12-05 09:46:40.215648734 +0000 UTC m=+0.581948730 container remove 2c20816625ecd8a6a20150d393a79537b095671b4bb07bfff6f64e1404e62c5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:46:40 compute-0 systemd[1]: libpod-conmon-2c20816625ecd8a6a20150d393a79537b095671b4bb07bfff6f64e1404e62c5e.scope: Deactivated successfully.
Dec 05 09:46:40 compute-0 sudo[85746]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:40 compute-0 sudo[85955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:46:40 compute-0 sudo[85955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:46:40 compute-0 sudo[85955]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-0adca3fa9a179ab470ac737fb509e469c4c55a9190008a5ebc6db5f755edf6f7-merged.mount: Deactivated successfully.
Dec 05 09:46:40 compute-0 sudo[85999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:46:40 compute-0 sudo[85999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:46:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec 05 09:46:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4152731759' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 05 09:46:40 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2932360286' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 05 09:46:40 compute-0 ceph-mon[74418]: osdmap e22: 2 total, 2 up, 2 in
Dec 05 09:46:40 compute-0 ceph-mon[74418]: pgmap v97: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 05 09:46:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4152731759' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 05 09:46:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Dec 05 09:46:40 compute-0 flamboyant_euclid[85939]: enabled application 'rbd' on pool 'volumes'
Dec 05 09:46:40 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Dec 05 09:46:40 compute-0 systemd[1]: libpod-85c8b330f6fff43893963117c20b3bdbfad825cfab3ec70a0f411d833ddee0c4.scope: Deactivated successfully.
Dec 05 09:46:40 compute-0 podman[86038]: 2025-12-05 09:46:40.652741351 +0000 UTC m=+0.026509140 container died 85c8b330f6fff43893963117c20b3bdbfad825cfab3ec70a0f411d833ddee0c4 (image=quay.io/ceph/ceph:v19, name=flamboyant_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:46:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "d28ba23f-1aba-42a5-a51e-f784cf052fe9"} v 0)
Dec 05 09:46:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d28ba23f-1aba-42a5-a51e-f784cf052fe9"}]: dispatch
Dec 05 09:46:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 05 09:46:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d28ba23f-1aba-42a5-a51e-f784cf052fe9"}]': finished
Dec 05 09:46:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Dec 05 09:46:40 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Dec 05 09:46:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:46:40 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:40 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fa36d6c219bad53475a8a3f47dbf68ff9678b457f6339c53c65482905a9ff48-merged.mount: Deactivated successfully.
Dec 05 09:46:40 compute-0 podman[86051]: 2025-12-05 09:46:40.743414284 +0000 UTC m=+0.084447631 container remove 85c8b330f6fff43893963117c20b3bdbfad825cfab3ec70a0f411d833ddee0c4 (image=quay.io/ceph/ceph:v19, name=flamboyant_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 09:46:40 compute-0 systemd[1]: libpod-conmon-85c8b330f6fff43893963117c20b3bdbfad825cfab3ec70a0f411d833ddee0c4.scope: Deactivated successfully.
Dec 05 09:46:40 compute-0 sudo[85913]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:40 compute-0 podman[86078]: 2025-12-05 09:46:40.821529879 +0000 UTC m=+0.042969807 container create 7961786a5962b63e0edbdf64521dff0c90d45ccc9ce32c16ffbeba04c1e43a27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_khayyam, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 09:46:40 compute-0 systemd[1]: Started libpod-conmon-7961786a5962b63e0edbdf64521dff0c90d45ccc9ce32c16ffbeba04c1e43a27.scope.
Dec 05 09:46:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:40 compute-0 podman[86078]: 2025-12-05 09:46:40.890566008 +0000 UTC m=+0.112005956 container init 7961786a5962b63e0edbdf64521dff0c90d45ccc9ce32c16ffbeba04c1e43a27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_khayyam, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:46:40 compute-0 podman[86078]: 2025-12-05 09:46:40.897657747 +0000 UTC m=+0.119097675 container start 7961786a5962b63e0edbdf64521dff0c90d45ccc9ce32c16ffbeba04c1e43a27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 05 09:46:40 compute-0 podman[86078]: 2025-12-05 09:46:40.805655767 +0000 UTC m=+0.027095715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:46:40 compute-0 modest_khayyam[86095]: 167 167
Dec 05 09:46:40 compute-0 podman[86078]: 2025-12-05 09:46:40.901056703 +0000 UTC m=+0.122496641 container attach 7961786a5962b63e0edbdf64521dff0c90d45ccc9ce32c16ffbeba04c1e43a27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_khayyam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:46:40 compute-0 systemd[1]: libpod-7961786a5962b63e0edbdf64521dff0c90d45ccc9ce32c16ffbeba04c1e43a27.scope: Deactivated successfully.
Dec 05 09:46:40 compute-0 podman[86078]: 2025-12-05 09:46:40.902852834 +0000 UTC m=+0.124292762 container died 7961786a5962b63e0edbdf64521dff0c90d45ccc9ce32c16ffbeba04c1e43a27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 05 09:46:40 compute-0 sudo[86122]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nopxjqqmqkfwhhzdqiwrbrjxtydubnvc ; /usr/bin/python3'
Dec 05 09:46:40 compute-0 sudo[86122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea013605f1d869b6df4ab137ec0498a797d08b2a691b12473b486d8ee7416408-merged.mount: Deactivated successfully.
Dec 05 09:46:40 compute-0 podman[86078]: 2025-12-05 09:46:40.94033092 +0000 UTC m=+0.161770848 container remove 7961786a5962b63e0edbdf64521dff0c90d45ccc9ce32c16ffbeba04c1e43a27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_khayyam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 09:46:40 compute-0 systemd[1]: libpod-conmon-7961786a5962b63e0edbdf64521dff0c90d45ccc9ce32c16ffbeba04c1e43a27.scope: Deactivated successfully.
Dec 05 09:46:41 compute-0 python3[86126]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:41 compute-0 podman[86145]: 2025-12-05 09:46:41.099570827 +0000 UTC m=+0.042285272 container create 2857656b728e5b252be52ae8cde3f5a9b9e4587b84c843251ced547c1613fe0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swirles, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 09:46:41 compute-0 podman[86147]: 2025-12-05 09:46:41.11635173 +0000 UTC m=+0.052010116 container create 6aa66037c25a25cc546aeefd1c593f8c07e636a9a369f6e9e1bc861af7437c7b (image=quay.io/ceph/ceph:v19, name=tender_khayyam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:46:41 compute-0 podman[86145]: 2025-12-05 09:46:41.080911642 +0000 UTC m=+0.023626107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:46:41 compute-0 systemd[1]: Started libpod-conmon-2857656b728e5b252be52ae8cde3f5a9b9e4587b84c843251ced547c1613fe0c.scope.
Dec 05 09:46:41 compute-0 systemd[1]: Started libpod-conmon-6aa66037c25a25cc546aeefd1c593f8c07e636a9a369f6e9e1bc861af7437c7b.scope.
Dec 05 09:46:41 compute-0 podman[86147]: 2025-12-05 09:46:41.092533899 +0000 UTC m=+0.028192305 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b694387eeacb254f9db1eba19fcfd82f1a1dab48893e0b83d7504fc327cc1a2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98e6687fd50f8654e66f17e6fd6f28afeb7590efef77273111f6773deac0e08/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98e6687fd50f8654e66f17e6fd6f28afeb7590efef77273111f6773deac0e08/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b694387eeacb254f9db1eba19fcfd82f1a1dab48893e0b83d7504fc327cc1a2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b694387eeacb254f9db1eba19fcfd82f1a1dab48893e0b83d7504fc327cc1a2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b694387eeacb254f9db1eba19fcfd82f1a1dab48893e0b83d7504fc327cc1a2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:41 compute-0 podman[86147]: 2025-12-05 09:46:41.221562745 +0000 UTC m=+0.157221151 container init 6aa66037c25a25cc546aeefd1c593f8c07e636a9a369f6e9e1bc861af7437c7b (image=quay.io/ceph/ceph:v19, name=tender_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 09:46:41 compute-0 podman[86145]: 2025-12-05 09:46:41.22704241 +0000 UTC m=+0.169756865 container init 2857656b728e5b252be52ae8cde3f5a9b9e4587b84c843251ced547c1613fe0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swirles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 09:46:41 compute-0 podman[86147]: 2025-12-05 09:46:41.230288151 +0000 UTC m=+0.165946537 container start 6aa66037c25a25cc546aeefd1c593f8c07e636a9a369f6e9e1bc861af7437c7b (image=quay.io/ceph/ceph:v19, name=tender_khayyam, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Dec 05 09:46:41 compute-0 podman[86147]: 2025-12-05 09:46:41.233527472 +0000 UTC m=+0.169185858 container attach 6aa66037c25a25cc546aeefd1c593f8c07e636a9a369f6e9e1bc861af7437c7b (image=quay.io/ceph/ceph:v19, name=tender_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 05 09:46:41 compute-0 podman[86145]: 2025-12-05 09:46:41.233787299 +0000 UTC m=+0.176501744 container start 2857656b728e5b252be52ae8cde3f5a9b9e4587b84c843251ced547c1613fe0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swirles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 09:46:41 compute-0 podman[86145]: 2025-12-05 09:46:41.237363381 +0000 UTC m=+0.180077826 container attach 2857656b728e5b252be52ae8cde3f5a9b9e4587b84c843251ced547c1613fe0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swirles, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:46:41 compute-0 keen_swirles[86177]: {
Dec 05 09:46:41 compute-0 keen_swirles[86177]:     "1": [
Dec 05 09:46:41 compute-0 keen_swirles[86177]:         {
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             "devices": [
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "/dev/loop3"
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             ],
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             "lv_name": "ceph_lv0",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             "lv_size": "21470642176",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             "name": "ceph_lv0",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             "tags": {
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.cluster_name": "ceph",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.crush_device_class": "",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.encrypted": "0",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.osd_id": "1",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.type": "block",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.vdo": "0",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:                 "ceph.with_tpm": "0"
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             },
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             "type": "block",
Dec 05 09:46:41 compute-0 keen_swirles[86177]:             "vg_name": "ceph_vg0"
Dec 05 09:46:41 compute-0 keen_swirles[86177]:         }
Dec 05 09:46:41 compute-0 keen_swirles[86177]:     ]
Dec 05 09:46:41 compute-0 keen_swirles[86177]: }
Dec 05 09:46:41 compute-0 systemd[1]: libpod-2857656b728e5b252be52ae8cde3f5a9b9e4587b84c843251ced547c1613fe0c.scope: Deactivated successfully.
Dec 05 09:46:41 compute-0 podman[86145]: 2025-12-05 09:46:41.563552192 +0000 UTC m=+0.506266647 container died 2857656b728e5b252be52ae8cde3f5a9b9e4587b84c843251ced547c1613fe0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swirles, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 09:46:41 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4152731759' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 05 09:46:41 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4152731759' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 05 09:46:41 compute-0 ceph-mon[74418]: osdmap e23: 2 total, 2 up, 2 in
Dec 05 09:46:41 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2869993958' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d28ba23f-1aba-42a5-a51e-f784cf052fe9"}]: dispatch
Dec 05 09:46:41 compute-0 ceph-mon[74418]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d28ba23f-1aba-42a5-a51e-f784cf052fe9"}]: dispatch
Dec 05 09:46:41 compute-0 ceph-mon[74418]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d28ba23f-1aba-42a5-a51e-f784cf052fe9"}]': finished
Dec 05 09:46:41 compute-0 ceph-mon[74418]: osdmap e24: 3 total, 2 up, 3 in
Dec 05 09:46:41 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:41 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3120955414' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 05 09:46:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b694387eeacb254f9db1eba19fcfd82f1a1dab48893e0b83d7504fc327cc1a2d-merged.mount: Deactivated successfully.
Dec 05 09:46:41 compute-0 podman[86145]: 2025-12-05 09:46:41.620608079 +0000 UTC m=+0.563322544 container remove 2857656b728e5b252be52ae8cde3f5a9b9e4587b84c843251ced547c1613fe0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swirles, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 09:46:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec 05 09:46:41 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2297978834' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 05 09:46:41 compute-0 systemd[1]: libpod-conmon-2857656b728e5b252be52ae8cde3f5a9b9e4587b84c843251ced547c1613fe0c.scope: Deactivated successfully.
Dec 05 09:46:41 compute-0 sudo[85999]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:41 compute-0 sudo[86221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:46:41 compute-0 sudo[86221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:46:41 compute-0 sudo[86221]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:41 compute-0 sudo[86246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:46:41 compute-0 sudo[86246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:46:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v100: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:42 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.wewrgp started
Dec 05 09:46:42 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mgr.compute-2.wewrgp 192.168.122.102:0/3988760260; not ready for session (expect reconnect)
Dec 05 09:46:42 compute-0 podman[86310]: 2025-12-05 09:46:42.325268076 +0000 UTC m=+0.031840987 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:46:42 compute-0 podman[86310]: 2025-12-05 09:46:42.573046358 +0000 UTC m=+0.279619249 container create 0d8d493419bdaadd8cce03be638d2f5199e9fed23e94bbd03bfe99ae48a4717b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ptolemy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:46:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 05 09:46:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2297978834' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 05 09:46:42 compute-0 ceph-mon[74418]: pgmap v100: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:42 compute-0 ceph-mon[74418]: Standby manager daemon compute-2.wewrgp started
Dec 05 09:46:42 compute-0 systemd[1]: Started libpod-conmon-0d8d493419bdaadd8cce03be638d2f5199e9fed23e94bbd03bfe99ae48a4717b.scope.
Dec 05 09:46:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2297978834' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 05 09:46:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Dec 05 09:46:42 compute-0 tender_khayyam[86179]: enabled application 'rbd' on pool 'backups'
Dec 05 09:46:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:42 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.hvnxai(active, since 2m), standbys: compute-2.wewrgp
Dec 05 09:46:42 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Dec 05 09:46:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:46:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.wewrgp", "id": "compute-2.wewrgp"} v 0)
Dec 05 09:46:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-2.wewrgp", "id": "compute-2.wewrgp"}]: dispatch
Dec 05 09:46:42 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:46:42 compute-0 podman[86310]: 2025-12-05 09:46:42.65331221 +0000 UTC m=+0.359885111 container init 0d8d493419bdaadd8cce03be638d2f5199e9fed23e94bbd03bfe99ae48a4717b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 05 09:46:42 compute-0 podman[86310]: 2025-12-05 09:46:42.659545106 +0000 UTC m=+0.366117997 container start 0d8d493419bdaadd8cce03be638d2f5199e9fed23e94bbd03bfe99ae48a4717b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 09:46:42 compute-0 practical_ptolemy[86327]: 167 167
Dec 05 09:46:42 compute-0 systemd[1]: libpod-0d8d493419bdaadd8cce03be638d2f5199e9fed23e94bbd03bfe99ae48a4717b.scope: Deactivated successfully.
Dec 05 09:46:42 compute-0 podman[86310]: 2025-12-05 09:46:42.665323289 +0000 UTC m=+0.371896180 container attach 0d8d493419bdaadd8cce03be638d2f5199e9fed23e94bbd03bfe99ae48a4717b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ptolemy, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:46:42 compute-0 systemd[1]: libpod-6aa66037c25a25cc546aeefd1c593f8c07e636a9a369f6e9e1bc861af7437c7b.scope: Deactivated successfully.
Dec 05 09:46:42 compute-0 podman[86310]: 2025-12-05 09:46:42.665827133 +0000 UTC m=+0.372400024 container died 0d8d493419bdaadd8cce03be638d2f5199e9fed23e94bbd03bfe99ae48a4717b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ptolemy, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 09:46:42 compute-0 podman[86147]: 2025-12-05 09:46:42.66606924 +0000 UTC m=+1.601727646 container died 6aa66037c25a25cc546aeefd1c593f8c07e636a9a369f6e9e1bc861af7437c7b (image=quay.io/ceph/ceph:v19, name=tender_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 09:46:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f98e6687fd50f8654e66f17e6fd6f28afeb7590efef77273111f6773deac0e08-merged.mount: Deactivated successfully.
Dec 05 09:46:42 compute-0 podman[86147]: 2025-12-05 09:46:42.713223868 +0000 UTC m=+1.648882244 container remove 6aa66037c25a25cc546aeefd1c593f8c07e636a9a369f6e9e1bc861af7437c7b (image=quay.io/ceph/ceph:v19, name=tender_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 09:46:42 compute-0 systemd[1]: libpod-conmon-6aa66037c25a25cc546aeefd1c593f8c07e636a9a369f6e9e1bc861af7437c7b.scope: Deactivated successfully.
Dec 05 09:46:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef4cbb25ea2e0c3912f4ffa0a3009cae3df79eb876357132fb0f8a3e715db5e5-merged.mount: Deactivated successfully.
Dec 05 09:46:42 compute-0 sudo[86122]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:42 compute-0 podman[86310]: 2025-12-05 09:46:42.74518845 +0000 UTC m=+0.451761341 container remove 0d8d493419bdaadd8cce03be638d2f5199e9fed23e94bbd03bfe99ae48a4717b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:46:42 compute-0 systemd[1]: libpod-conmon-0d8d493419bdaadd8cce03be638d2f5199e9fed23e94bbd03bfe99ae48a4717b.scope: Deactivated successfully.
Dec 05 09:46:42 compute-0 sudo[86392]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xddebhqnwhgsrtviniohorstfwsroljx ; /usr/bin/python3'
Dec 05 09:46:42 compute-0 sudo[86392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:42 compute-0 podman[86388]: 2025-12-05 09:46:42.891011909 +0000 UTC m=+0.044578848 container create ff4d96be0451e54891c1cf7a4d9819355f22d866150ef4801ce1ce296faa0424 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:46:42 compute-0 systemd[1]: Started libpod-conmon-ff4d96be0451e54891c1cf7a4d9819355f22d866150ef4801ce1ce296faa0424.scope.
Dec 05 09:46:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aafe98f1cbe1afaa0934b8b56d564f2f52beeafd903ba274eca4896ae81e9dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aafe98f1cbe1afaa0934b8b56d564f2f52beeafd903ba274eca4896ae81e9dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aafe98f1cbe1afaa0934b8b56d564f2f52beeafd903ba274eca4896ae81e9dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aafe98f1cbe1afaa0934b8b56d564f2f52beeafd903ba274eca4896ae81e9dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:42 compute-0 podman[86388]: 2025-12-05 09:46:42.870470059 +0000 UTC m=+0.024037028 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:46:42 compute-0 podman[86388]: 2025-12-05 09:46:42.971683902 +0000 UTC m=+0.125250841 container init ff4d96be0451e54891c1cf7a4d9819355f22d866150ef4801ce1ce296faa0424 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 09:46:42 compute-0 podman[86388]: 2025-12-05 09:46:42.983023812 +0000 UTC m=+0.136590771 container start ff4d96be0451e54891c1cf7a4d9819355f22d866150ef4801ce1ce296faa0424 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_saha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:46:42 compute-0 podman[86388]: 2025-12-05 09:46:42.987987321 +0000 UTC m=+0.141554390 container attach ff4d96be0451e54891c1cf7a4d9819355f22d866150ef4801ce1ce296faa0424 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:46:43 compute-0 python3[86401]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:43 compute-0 podman[86415]: 2025-12-05 09:46:43.078519383 +0000 UTC m=+0.036941822 container create 9dc461e3b07b6ff411dde3725df910d5067df8a79c8e2f183c3c879473e2f461 (image=quay.io/ceph/ceph:v19, name=youthful_black, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:46:43 compute-0 systemd[1]: Started libpod-conmon-9dc461e3b07b6ff411dde3725df910d5067df8a79c8e2f183c3c879473e2f461.scope.
Dec 05 09:46:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f25e595dd9ea79007a7a5a98755d6c5835812a25c4337f500b914fbca5f41a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f25e595dd9ea79007a7a5a98755d6c5835812a25c4337f500b914fbca5f41a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:43 compute-0 podman[86415]: 2025-12-05 09:46:43.155530922 +0000 UTC m=+0.113953381 container init 9dc461e3b07b6ff411dde3725df910d5067df8a79c8e2f183c3c879473e2f461 (image=quay.io/ceph/ceph:v19, name=youthful_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:46:43 compute-0 podman[86415]: 2025-12-05 09:46:43.06211916 +0000 UTC m=+0.020541619 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:43 compute-0 podman[86415]: 2025-12-05 09:46:43.162273963 +0000 UTC m=+0.120696422 container start 9dc461e3b07b6ff411dde3725df910d5067df8a79c8e2f183c3c879473e2f461 (image=quay.io/ceph/ceph:v19, name=youthful_black, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:46:43 compute-0 podman[86415]: 2025-12-05 09:46:43.16682339 +0000 UTC m=+0.125245849 container attach 9dc461e3b07b6ff411dde3725df910d5067df8a79c8e2f183c3c879473e2f461 (image=quay.io/ceph/ceph:v19, name=youthful_black, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:46:43 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 09:46:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:46:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 05 09:46:43 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3155403351' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 05 09:46:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 05 09:46:43 compute-0 lvm[86524]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:46:43 compute-0 lvm[86524]: VG ceph_vg0 finished
Dec 05 09:46:43 compute-0 strange_saha[86410]: {}
Dec 05 09:46:43 compute-0 systemd[1]: libpod-ff4d96be0451e54891c1cf7a4d9819355f22d866150ef4801ce1ce296faa0424.scope: Deactivated successfully.
Dec 05 09:46:43 compute-0 systemd[1]: libpod-ff4d96be0451e54891c1cf7a4d9819355f22d866150ef4801ce1ce296faa0424.scope: Consumed 1.172s CPU time.
Dec 05 09:46:43 compute-0 podman[86388]: 2025-12-05 09:46:43.772483627 +0000 UTC m=+0.926050596 container died ff4d96be0451e54891c1cf7a4d9819355f22d866150ef4801ce1ce296faa0424 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:46:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aafe98f1cbe1afaa0934b8b56d564f2f52beeafd903ba274eca4896ae81e9dc-merged.mount: Deactivated successfully.
Dec 05 09:46:43 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3155403351' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 05 09:46:43 compute-0 podman[86388]: 2025-12-05 09:46:43.861118995 +0000 UTC m=+1.014685934 container remove ff4d96be0451e54891c1cf7a4d9819355f22d866150ef4801ce1ce296faa0424 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_saha, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:46:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Dec 05 09:46:43 compute-0 youthful_black[86430]: enabled application 'rbd' on pool 'images'
Dec 05 09:46:43 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Dec 05 09:46:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:46:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:43 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:46:43 compute-0 systemd[1]: libpod-conmon-ff4d96be0451e54891c1cf7a4d9819355f22d866150ef4801ce1ce296faa0424.scope: Deactivated successfully.
Dec 05 09:46:43 compute-0 systemd[1]: libpod-9dc461e3b07b6ff411dde3725df910d5067df8a79c8e2f183c3c879473e2f461.scope: Deactivated successfully.
Dec 05 09:46:43 compute-0 podman[86415]: 2025-12-05 09:46:43.883530167 +0000 UTC m=+0.841952596 container died 9dc461e3b07b6ff411dde3725df910d5067df8a79c8e2f183c3c879473e2f461 (image=quay.io/ceph/ceph:v19, name=youthful_black, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 09:46:43 compute-0 sudo[86246]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3f25e595dd9ea79007a7a5a98755d6c5835812a25c4337f500b914fbca5f41a-merged.mount: Deactivated successfully.
Dec 05 09:46:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:46:43 compute-0 podman[86415]: 2025-12-05 09:46:43.925822738 +0000 UTC m=+0.884245187 container remove 9dc461e3b07b6ff411dde3725df910d5067df8a79c8e2f183c3c879473e2f461 (image=quay.io/ceph/ceph:v19, name=youthful_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Dec 05 09:46:43 compute-0 systemd[1]: libpod-conmon-9dc461e3b07b6ff411dde3725df910d5067df8a79c8e2f183c3c879473e2f461.scope: Deactivated successfully.
Dec 05 09:46:43 compute-0 sudo[86392]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:44 compute-0 sudo[86576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpvgftscanfwkzgivejuykyhjdpwqqxn ; /usr/bin/python3'
Dec 05 09:46:44 compute-0 sudo[86576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v103: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:44 compute-0 python3[86578]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:44 compute-0 podman[86579]: 2025-12-05 09:46:44.330176853 +0000 UTC m=+0.064867919 container create a1f207ceac1ad9a4a79dac3121853b3955690ca41ff8565ff87b6522be5224ba (image=quay.io/ceph/ceph:v19, name=condescending_kapitsa, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 09:46:44 compute-0 systemd[1]: Started libpod-conmon-a1f207ceac1ad9a4a79dac3121853b3955690ca41ff8565ff87b6522be5224ba.scope.
Dec 05 09:46:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a433fcf196e09259154151d6a63c1e33c7a41f75a1df96ebfff52b056cbb68/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a433fcf196e09259154151d6a63c1e33c7a41f75a1df96ebfff52b056cbb68/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:44 compute-0 podman[86579]: 2025-12-05 09:46:44.308825361 +0000 UTC m=+0.043516457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:44 compute-0 podman[86579]: 2025-12-05 09:46:44.410793814 +0000 UTC m=+0.145484900 container init a1f207ceac1ad9a4a79dac3121853b3955690ca41ff8565ff87b6522be5224ba (image=quay.io/ceph/ceph:v19, name=condescending_kapitsa, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Dec 05 09:46:44 compute-0 podman[86579]: 2025-12-05 09:46:44.41737747 +0000 UTC m=+0.152068526 container start a1f207ceac1ad9a4a79dac3121853b3955690ca41ff8565ff87b6522be5224ba (image=quay.io/ceph/ceph:v19, name=condescending_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Dec 05 09:46:44 compute-0 podman[86579]: 2025-12-05 09:46:44.421144236 +0000 UTC m=+0.155835332 container attach a1f207ceac1ad9a4a79dac3121853b3955690ca41ff8565ff87b6522be5224ba (image=quay.io/ceph/ceph:v19, name=condescending_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 05 09:46:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2297978834' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 05 09:46:44 compute-0 ceph-mon[74418]: mgrmap e10: compute-0.hvnxai(active, since 2m), standbys: compute-2.wewrgp
Dec 05 09:46:44 compute-0 ceph-mon[74418]: osdmap e25: 3 total, 2 up, 3 in
Dec 05 09:46:44 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:44 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-2.wewrgp", "id": "compute-2.wewrgp"}]: dispatch
Dec 05 09:46:44 compute-0 ceph-mon[74418]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 09:46:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3155403351' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 05 09:46:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec 05 09:46:44 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3501390752' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 05 09:46:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:46:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 05 09:46:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3501390752' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 05 09:46:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Dec 05 09:46:45 compute-0 condescending_kapitsa[86594]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 05 09:46:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3155403351' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 05 09:46:45 compute-0 ceph-mon[74418]: osdmap e26: 3 total, 2 up, 3 in
Dec 05 09:46:45 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:45 compute-0 ceph-mon[74418]: pgmap v103: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3501390752' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 05 09:46:45 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:45 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Dec 05 09:46:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:46:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:45 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:46:45 compute-0 systemd[1]: libpod-a1f207ceac1ad9a4a79dac3121853b3955690ca41ff8565ff87b6522be5224ba.scope: Deactivated successfully.
Dec 05 09:46:45 compute-0 podman[86579]: 2025-12-05 09:46:45.798016586 +0000 UTC m=+1.532707662 container died a1f207ceac1ad9a4a79dac3121853b3955690ca41ff8565ff87b6522be5224ba (image=quay.io/ceph/ceph:v19, name=condescending_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:46:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8a433fcf196e09259154151d6a63c1e33c7a41f75a1df96ebfff52b056cbb68-merged.mount: Deactivated successfully.
Dec 05 09:46:45 compute-0 podman[86579]: 2025-12-05 09:46:45.838100345 +0000 UTC m=+1.572791411 container remove a1f207ceac1ad9a4a79dac3121853b3955690ca41ff8565ff87b6522be5224ba (image=quay.io/ceph/ceph:v19, name=condescending_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 09:46:45 compute-0 systemd[1]: libpod-conmon-a1f207ceac1ad9a4a79dac3121853b3955690ca41ff8565ff87b6522be5224ba.scope: Deactivated successfully.
Dec 05 09:46:45 compute-0 sudo[86576]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:45 compute-0 sudo[86655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqsssrwxsyseunatfgpjvbdthwvqiaje ; /usr/bin/python3'
Dec 05 09:46:45 compute-0 sudo[86655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:46 compute-0 python3[86657]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v105: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:46 compute-0 podman[86658]: 2025-12-05 09:46:46.208977885 +0000 UTC m=+0.047797257 container create 602c2129e28cb7012eb16bc013f54e81955c3552d212700774cd0d744002c13a (image=quay.io/ceph/ceph:v19, name=intelligent_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 09:46:46 compute-0 systemd[1]: Started libpod-conmon-602c2129e28cb7012eb16bc013f54e81955c3552d212700774cd0d744002c13a.scope.
Dec 05 09:46:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9428745649eb95b56023bbc935471f47f525f8aacd78ba2d0d1773b679ab5fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9428745649eb95b56023bbc935471f47f525f8aacd78ba2d0d1773b679ab5fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:46 compute-0 podman[86658]: 2025-12-05 09:46:46.18715244 +0000 UTC m=+0.025971802 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:46 compute-0 podman[86658]: 2025-12-05 09:46:46.286608213 +0000 UTC m=+0.125427555 container init 602c2129e28cb7012eb16bc013f54e81955c3552d212700774cd0d744002c13a (image=quay.io/ceph/ceph:v19, name=intelligent_wu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:46:46 compute-0 podman[86658]: 2025-12-05 09:46:46.293686152 +0000 UTC m=+0.132505494 container start 602c2129e28cb7012eb16bc013f54e81955c3552d212700774cd0d744002c13a (image=quay.io/ceph/ceph:v19, name=intelligent_wu, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:46:46 compute-0 podman[86658]: 2025-12-05 09:46:46.297182402 +0000 UTC m=+0.136001744 container attach 602c2129e28cb7012eb16bc013f54e81955c3552d212700774cd0d744002c13a (image=quay.io/ceph/ceph:v19, name=intelligent_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 09:46:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec 05 09:46:46 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3398540181' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 05 09:46:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 05 09:46:46 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3398540181' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 05 09:46:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Dec 05 09:46:46 compute-0 intelligent_wu[86673]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 05 09:46:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3501390752' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 05 09:46:46 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:46 compute-0 ceph-mon[74418]: osdmap e27: 3 total, 2 up, 3 in
Dec 05 09:46:46 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:46 compute-0 ceph-mon[74418]: pgmap v105: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3398540181' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 05 09:46:46 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Dec 05 09:46:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:46:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:46 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:46:46 compute-0 systemd[1]: libpod-602c2129e28cb7012eb16bc013f54e81955c3552d212700774cd0d744002c13a.scope: Deactivated successfully.
Dec 05 09:46:46 compute-0 conmon[86673]: conmon 602c2129e28cb7012eb1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-602c2129e28cb7012eb16bc013f54e81955c3552d212700774cd0d744002c13a.scope/container/memory.events
Dec 05 09:46:46 compute-0 podman[86658]: 2025-12-05 09:46:46.818825931 +0000 UTC m=+0.657645283 container died 602c2129e28cb7012eb16bc013f54e81955c3552d212700774cd0d744002c13a (image=quay.io/ceph/ceph:v19, name=intelligent_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Dec 05 09:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9428745649eb95b56023bbc935471f47f525f8aacd78ba2d0d1773b679ab5fc-merged.mount: Deactivated successfully.
Dec 05 09:46:46 compute-0 podman[86658]: 2025-12-05 09:46:46.85892483 +0000 UTC m=+0.697744212 container remove 602c2129e28cb7012eb16bc013f54e81955c3552d212700774cd0d744002c13a (image=quay.io/ceph/ceph:v19, name=intelligent_wu, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 09:46:46 compute-0 systemd[1]: libpod-conmon-602c2129e28cb7012eb16bc013f54e81955c3552d212700774cd0d744002c13a.scope: Deactivated successfully.
Dec 05 09:46:46 compute-0 sudo[86655]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 05 09:46:47 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 05 09:46:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:46:47 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:47 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Dec 05 09:46:47 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Dec 05 09:46:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3398540181' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 05 09:46:47 compute-0 ceph-mon[74418]: osdmap e28: 3 total, 2 up, 3 in
Dec 05 09:46:47 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:47 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 05 09:46:47 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:46:47 compute-0 ceph-mon[74418]: Deploying daemon osd.2 on compute-2
Dec 05 09:46:47 compute-0 python3[86785]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:46:48 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unhddt started
Dec 05 09:46:48 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from mgr.compute-1.unhddt 192.168.122.101:0/889354599; not ready for session (expect reconnect)
Dec 05 09:46:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v107: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:48 compute-0 python3[86856]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764928007.5987523-37239-56928613778661/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:48 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 09:46:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:46:48 compute-0 sudo[86956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kruzwesqfniyjhdbexorgcdflcdsrsqe ; /usr/bin/python3'
Dec 05 09:46:48 compute-0 sudo[86956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:48 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 05 09:46:48 compute-0 ceph-mon[74418]: Standby manager daemon compute-1.unhddt started
Dec 05 09:46:48 compute-0 ceph-mon[74418]: pgmap v107: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:48 compute-0 ceph-mon[74418]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 09:46:48 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.hvnxai(active, since 2m), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:46:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.unhddt", "id": "compute-1.unhddt"} v 0)
Dec 05 09:46:48 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unhddt", "id": "compute-1.unhddt"}]: dispatch
Dec 05 09:46:48 compute-0 python3[86958]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:46:48 compute-0 sudo[86956]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:49 compute-0 sudo[87031]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxnhmzefjcuvbdovainexhhhuadyynae ; /usr/bin/python3'
Dec 05 09:46:49 compute-0 sudo[87031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:49 compute-0 python3[87033]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764928008.552175-37253-8027840963287/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=e5f29cb49ad9c4b27c3b333d0c8a2c2ca6a22209 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:49 compute-0 sudo[87031]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:49 compute-0 sudo[87081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruoeikostkhjzrjlpdbpwkzlqebafzkf ; /usr/bin/python3'
Dec 05 09:46:49 compute-0 sudo[87081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:49 compute-0 python3[87083]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:49 compute-0 podman[87084]: 2025-12-05 09:46:49.730847139 +0000 UTC m=+0.051496282 container create 370eb19c53c2d8f02c1a56267bc65f1e80d6e6a70f077d0b330cdaf618011072 (image=quay.io/ceph/ceph:v19, name=kind_jepsen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:46:49 compute-0 systemd[1]: Started libpod-conmon-370eb19c53c2d8f02c1a56267bc65f1e80d6e6a70f077d0b330cdaf618011072.scope.
Dec 05 09:46:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9522aa57cfe4e6e649b1f4d0501f6cb649be5d66ab74376e0e68848296490108/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9522aa57cfe4e6e649b1f4d0501f6cb649be5d66ab74376e0e68848296490108/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9522aa57cfe4e6e649b1f4d0501f6cb649be5d66ab74376e0e68848296490108/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:49 compute-0 podman[87084]: 2025-12-05 09:46:49.798517366 +0000 UTC m=+0.119166499 container init 370eb19c53c2d8f02c1a56267bc65f1e80d6e6a70f077d0b330cdaf618011072 (image=quay.io/ceph/ceph:v19, name=kind_jepsen, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 05 09:46:49 compute-0 podman[87084]: 2025-12-05 09:46:49.709814406 +0000 UTC m=+0.030463559 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:49 compute-0 podman[87084]: 2025-12-05 09:46:49.806002007 +0000 UTC m=+0.126651170 container start 370eb19c53c2d8f02c1a56267bc65f1e80d6e6a70f077d0b330cdaf618011072 (image=quay.io/ceph/ceph:v19, name=kind_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 05 09:46:49 compute-0 podman[87084]: 2025-12-05 09:46:49.811190143 +0000 UTC m=+0.131839296 container attach 370eb19c53c2d8f02c1a56267bc65f1e80d6e6a70f077d0b330cdaf618011072 (image=quay.io/ceph/ceph:v19, name=kind_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 09:46:49 compute-0 ceph-mon[74418]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 05 09:46:49 compute-0 ceph-mon[74418]: mgrmap e11: compute-0.hvnxai(active, since 2m), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:46:49 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unhddt", "id": "compute-1.unhddt"}]: dispatch
Dec 05 09:46:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v108: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 05 09:46:50 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2776674545' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 05 09:46:50 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2776674545' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 05 09:46:50 compute-0 kind_jepsen[87099]: 
Dec 05 09:46:50 compute-0 kind_jepsen[87099]: [global]
Dec 05 09:46:50 compute-0 kind_jepsen[87099]:         fsid = 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:46:50 compute-0 kind_jepsen[87099]:         mon_host = 192.168.122.100
Dec 05 09:46:50 compute-0 systemd[1]: libpod-370eb19c53c2d8f02c1a56267bc65f1e80d6e6a70f077d0b330cdaf618011072.scope: Deactivated successfully.
Dec 05 09:46:50 compute-0 podman[87084]: 2025-12-05 09:46:50.291128877 +0000 UTC m=+0.611778050 container died 370eb19c53c2d8f02c1a56267bc65f1e80d6e6a70f077d0b330cdaf618011072 (image=quay.io/ceph/ceph:v19, name=kind_jepsen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 09:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9522aa57cfe4e6e649b1f4d0501f6cb649be5d66ab74376e0e68848296490108-merged.mount: Deactivated successfully.
Dec 05 09:46:50 compute-0 podman[87084]: 2025-12-05 09:46:50.339406018 +0000 UTC m=+0.660055151 container remove 370eb19c53c2d8f02c1a56267bc65f1e80d6e6a70f077d0b330cdaf618011072 (image=quay.io/ceph/ceph:v19, name=kind_jepsen, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:46:50 compute-0 systemd[1]: libpod-conmon-370eb19c53c2d8f02c1a56267bc65f1e80d6e6a70f077d0b330cdaf618011072.scope: Deactivated successfully.
Dec 05 09:46:50 compute-0 sudo[87081]: pam_unix(sudo:session): session closed for user root
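The task above runs `ceph config assimilate-conf` inside a throwaway quay.io/ceph/ceph:v19 container: options that can live in the monitors' central configuration database are moved there, and whatever must stay in a local ceph.conf (here only fsid and mon_host) is printed back, which is the [global] block the kind_jepsen container emitted. A minimal sketch of the same step follows; the real assimilate_ceph.conf is rendered from ceph_rgw.conf.j2 and never appears in this log, so the option shown is purely illustrative and the volume list is abbreviated from the logged command.

    # Illustrative input file; the job's real assimilate_ceph.conf is not in this log.
    cat > /home/ceph-admin/assimilate_ceph.conf <<'EOF'
    [client.rgw]
    rgw_enable_apis = s3, swift    # assumed example option, not taken from this job
    EOF

    # Push the options into the mon config database; stdout is the leftover minimal conf.
    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      config assimilate-conf -i /home/assimilate_ceph.conf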
Dec 05 09:46:50 compute-0 sudo[87159]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvuiosxawnhkyveswokzyccsetdfxsdm ; /usr/bin/python3'
Dec 05 09:46:50 compute-0 sudo[87159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:50 compute-0 python3[87161]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:50 compute-0 podman[87162]: 2025-12-05 09:46:50.744922434 +0000 UTC m=+0.066874014 container create b71b5e91c7a30d24f0cf2e51b468a2bacc09254d6530b656aec5ec445967ae17 (image=quay.io/ceph/ceph:v19, name=distracted_goodall, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:46:50 compute-0 systemd[1]: Started libpod-conmon-b71b5e91c7a30d24f0cf2e51b468a2bacc09254d6530b656aec5ec445967ae17.scope.
Dec 05 09:46:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83056c7b635f265ef61fd2f1d69b6be589462789e2c9f1db2c18cb8f832ae3bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83056c7b635f265ef61fd2f1d69b6be589462789e2c9f1db2c18cb8f832ae3bc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83056c7b635f265ef61fd2f1d69b6be589462789e2c9f1db2c18cb8f832ae3bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:50 compute-0 podman[87162]: 2025-12-05 09:46:50.726377462 +0000 UTC m=+0.048329062 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:50 compute-0 podman[87162]: 2025-12-05 09:46:50.824637651 +0000 UTC m=+0.146589301 container init b71b5e91c7a30d24f0cf2e51b468a2bacc09254d6530b656aec5ec445967ae17 (image=quay.io/ceph/ceph:v19, name=distracted_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:46:50 compute-0 podman[87162]: 2025-12-05 09:46:50.831748181 +0000 UTC m=+0.153699791 container start b71b5e91c7a30d24f0cf2e51b468a2bacc09254d6530b656aec5ec445967ae17 (image=quay.io/ceph/ceph:v19, name=distracted_goodall, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 05 09:46:50 compute-0 ceph-mon[74418]: pgmap v108: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:50 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2776674545' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 05 09:46:50 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2776674545' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 05 09:46:50 compute-0 podman[87162]: 2025-12-05 09:46:50.835621691 +0000 UTC m=+0.157573301 container attach b71b5e91c7a30d24f0cf2e51b468a2bacc09254d6530b656aec5ec445967ae17 (image=quay.io/ceph/ceph:v19, name=distracted_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 09:46:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec 05 09:46:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/496895082' entity='client.admin' 
Dec 05 09:46:51 compute-0 distracted_goodall[87178]: set ssl_option
Dec 05 09:46:51 compute-0 systemd[1]: libpod-b71b5e91c7a30d24f0cf2e51b468a2bacc09254d6530b656aec5ec445967ae17.scope: Deactivated successfully.
Dec 05 09:46:51 compute-0 podman[87162]: 2025-12-05 09:46:51.364678089 +0000 UTC m=+0.686629659 container died b71b5e91c7a30d24f0cf2e51b468a2bacc09254d6530b656aec5ec445967ae17 (image=quay.io/ceph/ceph:v19, name=distracted_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 05 09:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-83056c7b635f265ef61fd2f1d69b6be589462789e2c9f1db2c18cb8f832ae3bc-merged.mount: Deactivated successfully.
Dec 05 09:46:51 compute-0 podman[87162]: 2025-12-05 09:46:51.402898165 +0000 UTC m=+0.724849725 container remove b71b5e91c7a30d24f0cf2e51b468a2bacc09254d6530b656aec5ec445967ae17 (image=quay.io/ceph/ceph:v19, name=distracted_goodall, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 09:46:51 compute-0 systemd[1]: libpod-conmon-b71b5e91c7a30d24f0cf2e51b468a2bacc09254d6530b656aec5ec445967ae17.scope: Deactivated successfully.
Dec 05 09:46:51 compute-0 sudo[87159]: pam_unix(sudo:session): session closed for user root
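The distracted_goodall container stores an ssl_option string in the monitors' generic key/value store with `ceph config-key set`; this log does not show the consumer, but presumably the value is read back later when the RGW/HAProxy front end is configured. A short sketch of working with that store, assuming the admin keyring is available on the host running the CLI:

    # Store, read back, and list entries in the mon config-key store.
    ceph config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
    ceph config-key get ssl_option
    ceph config-key ls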
Dec 05 09:46:51 compute-0 sudo[87238]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksiknxcpubsnkyblgzbyisnzmvrqxawc ; /usr/bin/python3'
Dec 05 09:46:51 compute-0 sudo[87238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:51 compute-0 python3[87240]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:51 compute-0 podman[87241]: 2025-12-05 09:46:51.809791682 +0000 UTC m=+0.045519105 container create df55a20322d8db81418040ddceb62279fe7260e9e82001ccd1bd67111e71c26a (image=quay.io/ceph/ceph:v19, name=sad_kalam, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 05 09:46:51 compute-0 systemd[1]: Started libpod-conmon-df55a20322d8db81418040ddceb62279fe7260e9e82001ccd1bd67111e71c26a.scope.
Dec 05 09:46:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7d75ebe7adb4df5180469c506be0585a8b58fdc670a8d0b140e1110fa0f0c0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7d75ebe7adb4df5180469c506be0585a8b58fdc670a8d0b140e1110fa0f0c0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7d75ebe7adb4df5180469c506be0585a8b58fdc670a8d0b140e1110fa0f0c0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:51 compute-0 podman[87241]: 2025-12-05 09:46:51.788917293 +0000 UTC m=+0.024644766 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:51 compute-0 podman[87241]: 2025-12-05 09:46:51.892302386 +0000 UTC m=+0.128029839 container init df55a20322d8db81418040ddceb62279fe7260e9e82001ccd1bd67111e71c26a (image=quay.io/ceph/ceph:v19, name=sad_kalam, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 09:46:51 compute-0 podman[87241]: 2025-12-05 09:46:51.899788928 +0000 UTC m=+0.135516351 container start df55a20322d8db81418040ddceb62279fe7260e9e82001ccd1bd67111e71c26a (image=quay.io/ceph/ceph:v19, name=sad_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 09:46:51 compute-0 podman[87241]: 2025-12-05 09:46:51.90413181 +0000 UTC m=+0.139859243 container attach df55a20322d8db81418040ddceb62279fe7260e9e82001ccd1bd67111e71c26a (image=quay.io/ceph/ceph:v19, name=sad_kalam, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:46:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:46:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:46:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v109: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:52 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14277 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:46:52 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 05 09:46:52 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 05 09:46:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 05 09:46:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:52 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Dec 05 09:46:52 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Dec 05 09:46:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 05 09:46:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/496895082' entity='client.admin' 
Dec 05 09:46:52 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:52 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:52 compute-0 sad_kalam[87256]: Scheduled rgw.rgw update...
Dec 05 09:46:52 compute-0 sad_kalam[87256]: Scheduled ingress.rgw.default update...
Dec 05 09:46:52 compute-0 systemd[1]: libpod-df55a20322d8db81418040ddceb62279fe7260e9e82001ccd1bd67111e71c26a.scope: Deactivated successfully.
Dec 05 09:46:52 compute-0 conmon[87256]: conmon df55a20322d8db814180 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df55a20322d8db81418040ddceb62279fe7260e9e82001ccd1bd67111e71c26a.scope/container/memory.events
Dec 05 09:46:52 compute-0 podman[87241]: 2025-12-05 09:46:52.408755699 +0000 UTC m=+0.644483122 container died df55a20322d8db81418040ddceb62279fe7260e9e82001ccd1bd67111e71c26a (image=quay.io/ceph/ceph:v19, name=sad_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Dec 05 09:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-de7d75ebe7adb4df5180469c506be0585a8b58fdc670a8d0b140e1110fa0f0c0-merged.mount: Deactivated successfully.
Dec 05 09:46:52 compute-0 podman[87241]: 2025-12-05 09:46:52.447636365 +0000 UTC m=+0.683363808 container remove df55a20322d8db81418040ddceb62279fe7260e9e82001ccd1bd67111e71c26a (image=quay.io/ceph/ceph:v19, name=sad_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 09:46:52 compute-0 systemd[1]: libpod-conmon-df55a20322d8db81418040ddceb62279fe7260e9e82001ccd1bd67111e71c26a.scope: Deactivated successfully.
Dec 05 09:46:52 compute-0 sudo[87238]: pam_unix(sudo:session): session closed for user root
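The sad_kalam container applies /tmp/ceph_rgw.yml via `ceph orch apply --in-file`, which is what makes the mgr save the rgw.rgw spec with placement compute-0;compute-1;compute-2 and the ingress.rgw.default spec with placement count:2 before scheduling both updates. The file itself is rendered from ceph_rgw.yml.j2 and never appears in the log; the sketch below is only a guess at its shape, and the port and virtual-IP values are assumptions.

    # Illustrative two-document service spec; values are placeholders, not from this job.
    cat > /tmp/ceph_rgw.yml <<'EOF'
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: ingress
    service_id: rgw.default
    placement:
      count: 2
    spec:
      backend_service: rgw.rgw
      virtual_ip: 192.168.122.200/24   # assumed VIP
      frontend_port: 8080              # assumed
      monitor_port: 8999               # assumed
    EOF

    # Applied the same way the job does it, through the containerised ceph CLI.
    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch apply --in-file /home/ceph_spec.yaml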
Dec 05 09:46:52 compute-0 python3[87368]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:46:53 compute-0 python3[87439]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764928012.630735-37272-215480249368916/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:53 compute-0 sudo[87487]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtwtmeqofqsnwdnemeghcakcoxpflsfi ; /usr/bin/python3'
Dec 05 09:46:53 compute-0 sudo[87487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v110: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:46:55
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.mgr', 'backups', 'vms', 'images', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta']
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
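The pg_autoscaler entries above report, per pool, the fraction of raw capacity in use, the pool's bias, and the PG target derived from them before it is rounded to the value the autoscaler actually requests; the `osd pool set ... pg_num 32` commands that appear shortly afterwards are the result of those decisions. The same view is available interactively, assuming the admin keyring is present where the CLI runs:

    # Show the autoscaler's per-pool sizing decisions and one resulting pg_num.
    ceph osd pool autoscale-status
    ceph osd pool get vms pg_num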
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:46:55 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v111: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:46:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:46:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec 05 09:46:56 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 05 09:46:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec 05 09:46:56 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:46:56 compute-0 ceph-mon[74418]: pgmap v109: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:56 compute-0 ceph-mon[74418]: from='client.14277 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:46:56 compute-0 ceph-mon[74418]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 05 09:46:56 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:56 compute-0 ceph-mon[74418]: Saving service ingress.rgw.default spec with placement count:2
Dec 05 09:46:56 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:56 compute-0 python3[87489]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:56 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 05 09:46:56 compute-0 podman[87490]: 2025-12-05 09:46:56.612054414 +0000 UTC m=+0.024306496 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:46:56 compute-0 podman[87490]: 2025-12-05 09:46:56.970359471 +0000 UTC m=+0.382611533 container create 789140981c3e65146f9031461608dffac596cf600eff64ab13ee08dec0743353 (image=quay.io/ceph/ceph:v19, name=angry_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 05 09:46:56 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 05 09:46:56 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:46:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Dec 05 09:46:56 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Dec 05 09:46:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Dec 05 09:46:56 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 05 09:46:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e29 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Dec 05 09:46:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:46:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:46:56 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 51c14f0f-1b4d-4ba4-a0f8-d067b0097ba4 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 05 09:46:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec 05 09:46:56 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:46:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:57 compute-0 systemd[1]: Started libpod-conmon-789140981c3e65146f9031461608dffac596cf600eff64ab13ee08dec0743353.scope.
Dec 05 09:46:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04cb9e46d743878048d1cc3f63aa267e13d68f64ac49224bba537875cbb99187/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04cb9e46d743878048d1cc3f63aa267e13d68f64ac49224bba537875cbb99187/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04cb9e46d743878048d1cc3f63aa267e13d68f64ac49224bba537875cbb99187/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:57 compute-0 podman[87490]: 2025-12-05 09:46:57.088279404 +0000 UTC m=+0.500531486 container init 789140981c3e65146f9031461608dffac596cf600eff64ab13ee08dec0743353 (image=quay.io/ceph/ceph:v19, name=angry_zhukovsky, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:46:57 compute-0 podman[87490]: 2025-12-05 09:46:57.098361548 +0000 UTC m=+0.510613610 container start 789140981c3e65146f9031461608dffac596cf600eff64ab13ee08dec0743353 (image=quay.io/ceph/ceph:v19, name=angry_zhukovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 05 09:46:57 compute-0 podman[87490]: 2025-12-05 09:46:57.113713801 +0000 UTC m=+0.525965893 container attach 789140981c3e65146f9031461608dffac596cf600eff64ab13ee08dec0743353 (image=quay.io/ceph/ceph:v19, name=angry_zhukovsky, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Dec 05 09:46:57 compute-0 sudo[87508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:46:57 compute-0 sudo[87508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:46:57 compute-0 sudo[87508]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:57 compute-0 sudo[87553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:46:57 compute-0 sudo[87553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:46:57 compute-0 sudo[87553]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14283 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:46:57 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service node-exporter spec with placement *
Dec 05 09:46:57 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Dec 05 09:46:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 05 09:46:57 compute-0 sudo[87578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:46:57 compute-0 sudo[87578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:46:57 compute-0 sudo[87578]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v113: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec 05 09:46:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:46:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:46:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 05 09:46:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:58 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Dec 05 09:46:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Dec 05 09:46:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 05 09:46:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 05 09:46:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:46:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:46:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Dec 05 09:46:58 compute-0 ceph-mon[74418]: pgmap v110: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='osd.2 [v2:192.168.122.102:6800/1148467787,v1:192.168.122.102:6801/1148467787]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 05 09:46:58 compute-0 ceph-mon[74418]: purged_snaps scrub starts
Dec 05 09:46:58 compute-0 ceph-mon[74418]: purged_snaps scrub ok
Dec 05 09:46:58 compute-0 ceph-mon[74418]: pgmap v111: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='osd.2 [v2:192.168.122.102:6800/1148467787,v1:192.168.122.102:6801/1148467787]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 05 09:46:58 compute-0 ceph-mon[74418]: osdmap e29: 3 total, 2 up, 3 in
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='client.14283 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:46:58 compute-0 ceph-mon[74418]: Saving service node-exporter spec with placement *
Dec 05 09:46:58 compute-0 ceph-mon[74418]: pgmap v113: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:46:58 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:46:58 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 30 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=30 pruub=10.725196838s) [] r=-1 lpr=30 pi=[16,30)/1 crt=0'0 mlcod 0'0 active pruub 92.261352539s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:46:58 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 30 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=30 pruub=8.573919296s) [] r=-1 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 active pruub 90.110107422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:46:58 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 30 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=30 pruub=10.725196838s) [] r=-1 lpr=30 pi=[16,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.261352539s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:46:58 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 30 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=30 pruub=8.573919296s) [] r=-1 lpr=30 pi=[14,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.110107422s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:46:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:58 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Dec 05 09:46:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:46:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:58 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:46:58 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev ee43a326-c5a1-49fc-a694-35ba932dbb2a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 05 09:46:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:46:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec 05 09:46:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:46:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:58 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Dec 05 09:46:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Dec 05 09:46:58 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1148467787; not ready for session (expect reconnect)
Dec 05 09:46:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 05 09:46:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:46:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:58 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:46:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:59 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Dec 05 09:46:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Dec 05 09:46:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 05 09:46:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:59 compute-0 angry_zhukovsky[87505]: Scheduled node-exporter update...
Dec 05 09:46:59 compute-0 angry_zhukovsky[87505]: Scheduled grafana update...
Dec 05 09:46:59 compute-0 angry_zhukovsky[87505]: Scheduled prometheus update...
Dec 05 09:46:59 compute-0 angry_zhukovsky[87505]: Scheduled alertmanager update...
Dec 05 09:46:59 compute-0 systemd[1]: libpod-789140981c3e65146f9031461608dffac596cf600eff64ab13ee08dec0743353.scope: Deactivated successfully.
Dec 05 09:46:59 compute-0 podman[87490]: 2025-12-05 09:46:59.103743028 +0000 UTC m=+2.515995100 container died 789140981c3e65146f9031461608dffac596cf600eff64ab13ee08dec0743353 (image=quay.io/ceph/ceph:v19, name=angry_zhukovsky, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:46:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-04cb9e46d743878048d1cc3f63aa267e13d68f64ac49224bba537875cbb99187-merged.mount: Deactivated successfully.
Dec 05 09:46:59 compute-0 podman[87490]: 2025-12-05 09:46:59.153336195 +0000 UTC m=+2.565588257 container remove 789140981c3e65146f9031461608dffac596cf600eff64ab13ee08dec0743353 (image=quay.io/ceph/ceph:v19, name=angry_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 09:46:59 compute-0 systemd[1]: libpod-conmon-789140981c3e65146f9031461608dffac596cf600eff64ab13ee08dec0743353.scope: Deactivated successfully.
Dec 05 09:46:59 compute-0 sudo[87487]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:59 compute-0 ceph-mgr[74711]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Dec 05 09:46:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:46:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:46:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:59 compute-0 sudo[87671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksgimzrttqedwcwgfllpcyrkzhplximr ; /usr/bin/python3'
Dec 05 09:46:59 compute-0 sudo[87671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:59 compute-0 python3[87673]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:59 compute-0 podman[87674]: 2025-12-05 09:46:59.84727043 +0000 UTC m=+0.039263587 container create 928280628b1564c5df878d9d402c9e459406feb140f74a587bd1219fe7f374b1 (image=quay.io/ceph/ceph:v19, name=confident_hawking, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:46:59 compute-0 systemd[1]: Started libpod-conmon-928280628b1564c5df878d9d402c9e459406feb140f74a587bd1219fe7f374b1.scope.
Dec 05 09:46:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55204f235bbcd0e1550bbe9df7b7c253b5d1a6105896eda299d934cb66ea2ae3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55204f235bbcd0e1550bbe9df7b7c253b5d1a6105896eda299d934cb66ea2ae3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55204f235bbcd0e1550bbe9df7b7c253b5d1a6105896eda299d934cb66ea2ae3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:59 compute-0 podman[87674]: 2025-12-05 09:46:59.923026265 +0000 UTC m=+0.115019422 container init 928280628b1564c5df878d9d402c9e459406feb140f74a587bd1219fe7f374b1 (image=quay.io/ceph/ceph:v19, name=confident_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:46:59 compute-0 podman[87674]: 2025-12-05 09:46:59.829611662 +0000 UTC m=+0.021604849 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:46:59 compute-0 podman[87674]: 2025-12-05 09:46:59.928980302 +0000 UTC m=+0.120973459 container start 928280628b1564c5df878d9d402c9e459406feb140f74a587bd1219fe7f374b1 (image=quay.io/ceph/ceph:v19, name=confident_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:46:59 compute-0 podman[87674]: 2025-12-05 09:46:59.932768209 +0000 UTC m=+0.124761396 container attach 928280628b1564c5df878d9d402c9e459406feb140f74a587bd1219fe7f374b1 (image=quay.io/ceph/ceph:v19, name=confident_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 05 09:46:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 05 09:46:59 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1148467787; not ready for session (expect reconnect)
Dec 05 09:46:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:46:59 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:59 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:59 compute-0 ceph-mon[74418]: Saving service grafana spec with placement compute-0;count:1
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:59 compute-0 ceph-mon[74418]: osdmap e30: 3 total, 2 up, 3 in
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:59 compute-0 ceph-mon[74418]: Saving service prometheus spec with placement compute-0;count:1
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:59 compute-0 ceph-mon[74418]: Saving service alertmanager spec with placement compute-0;count:1
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:46:59 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:47:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Dec 05 09:47:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Dec 05 09:47:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:47:00 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:00 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:47:00 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 397ff74d-2e48-4235-9d7d-73e0dad58da9 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 05 09:47:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec 05 09:47:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:47:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v116: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:47:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec 05 09:47:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec 05 09:47:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Dec 05 09:47:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/439154004' entity='client.admin' 
Dec 05 09:47:00 compute-0 systemd[1]: libpod-928280628b1564c5df878d9d402c9e459406feb140f74a587bd1219fe7f374b1.scope: Deactivated successfully.
Dec 05 09:47:00 compute-0 podman[87674]: 2025-12-05 09:47:00.298196987 +0000 UTC m=+0.490190154 container died 928280628b1564c5df878d9d402c9e459406feb140f74a587bd1219fe7f374b1 (image=quay.io/ceph/ceph:v19, name=confident_hawking, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 09:47:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-55204f235bbcd0e1550bbe9df7b7c253b5d1a6105896eda299d934cb66ea2ae3-merged.mount: Deactivated successfully.
Dec 05 09:47:00 compute-0 podman[87674]: 2025-12-05 09:47:00.341419964 +0000 UTC m=+0.533413161 container remove 928280628b1564c5df878d9d402c9e459406feb140f74a587bd1219fe7f374b1 (image=quay.io/ceph/ceph:v19, name=confident_hawking, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:47:00 compute-0 systemd[1]: libpod-conmon-928280628b1564c5df878d9d402c9e459406feb140f74a587bd1219fe7f374b1.scope: Deactivated successfully.
Dec 05 09:47:00 compute-0 sudo[87671]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:00 compute-0 sudo[87749]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skkfvfmddtywxjpilqieihsgftliwgxo ; /usr/bin/python3'
Dec 05 09:47:00 compute-0 sudo[87749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:00 compute-0 python3[87751]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:00 compute-0 podman[87752]: 2025-12-05 09:47:00.725173248 +0000 UTC m=+0.052304105 container create a3d319da62ff4fd6e39977d3010e8b2693aafce26d84ddf0c22e89ee1b4ca43b (image=quay.io/ceph/ceph:v19, name=loving_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 05 09:47:00 compute-0 systemd[1]: Started libpod-conmon-a3d319da62ff4fd6e39977d3010e8b2693aafce26d84ddf0c22e89ee1b4ca43b.scope.
Dec 05 09:47:00 compute-0 podman[87752]: 2025-12-05 09:47:00.694898335 +0000 UTC m=+0.022029202 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb6d5b146c6636ca89784aba751a8f1dcf4822d088da001b5b5f6ca3d807cbc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb6d5b146c6636ca89784aba751a8f1dcf4822d088da001b5b5f6ca3d807cbc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb6d5b146c6636ca89784aba751a8f1dcf4822d088da001b5b5f6ca3d807cbc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:00 compute-0 podman[87752]: 2025-12-05 09:47:00.81145396 +0000 UTC m=+0.138584827 container init a3d319da62ff4fd6e39977d3010e8b2693aafce26d84ddf0c22e89ee1b4ca43b (image=quay.io/ceph/ceph:v19, name=loving_wu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:00 compute-0 podman[87752]: 2025-12-05 09:47:00.818391095 +0000 UTC m=+0.145521942 container start a3d319da62ff4fd6e39977d3010e8b2693aafce26d84ddf0c22e89ee1b4ca43b (image=quay.io/ceph/ceph:v19, name=loving_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 05 09:47:00 compute-0 podman[87752]: 2025-12-05 09:47:00.824400374 +0000 UTC m=+0.151531221 container attach a3d319da62ff4fd6e39977d3010e8b2693aafce26d84ddf0c22e89ee1b4ca43b (image=quay.io/ceph/ceph:v19, name=loving_wu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 05 09:47:00 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1148467787; not ready for session (expect reconnect)
Dec 05 09:47:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:47:00 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:00 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:47:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:47:00 compute-0 ceph-mon[74418]: osdmap e31: 3 total, 2 up, 3 in
Dec 05 09:47:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:47:00 compute-0 ceph-mon[74418]: pgmap v116: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:47:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/439154004' entity='client.admin' 
Dec 05 09:47:00 compute-0 ceph-mon[74418]: 2.1f scrub starts
Dec 05 09:47:00 compute-0 ceph-mon[74418]: 2.1f scrub ok
Dec 05 09:47:00 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 05 09:47:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:47:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:47:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:47:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Dec 05 09:47:01 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Dec 05 09:47:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:47:01 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:01 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:47:01 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 2f00832c-bb91-470e-8c94-9d9cd7a3bb6c (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 05 09:47:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Dec 05 09:47:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:47:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Dec 05 09:47:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4116606713' entity='client.admin' 
Dec 05 09:47:01 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=32 pruub=6.349681854s) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.110107422s@ mbc={}] PeeringState::start_peering_interval up [] -> [], acting [] -> [], acting_primary ? -> -1, up_primary ? -> -1, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:01 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 32 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=32 pruub=15.363299370s) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active pruub 99.123786926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:01 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=32 pruub=6.349681854s) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.110107422s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:01 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 32 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=32 pruub=15.363299370s) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown pruub 99.123786926s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:01 compute-0 systemd[1]: libpod-a3d319da62ff4fd6e39977d3010e8b2693aafce26d84ddf0c22e89ee1b4ca43b.scope: Deactivated successfully.
Dec 05 09:47:01 compute-0 podman[87752]: 2025-12-05 09:47:01.199761891 +0000 UTC m=+0.526892748 container died a3d319da62ff4fd6e39977d3010e8b2693aafce26d84ddf0c22e89ee1b4ca43b (image=quay.io/ceph/ceph:v19, name=loving_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 05 09:47:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7eb6d5b146c6636ca89784aba751a8f1dcf4822d088da001b5b5f6ca3d807cbc-merged.mount: Deactivated successfully.
Dec 05 09:47:01 compute-0 podman[87752]: 2025-12-05 09:47:01.244400079 +0000 UTC m=+0.571530926 container remove a3d319da62ff4fd6e39977d3010e8b2693aafce26d84ddf0c22e89ee1b4ca43b (image=quay.io/ceph/ceph:v19, name=loving_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 09:47:01 compute-0 systemd[1]: libpod-conmon-a3d319da62ff4fd6e39977d3010e8b2693aafce26d84ddf0c22e89ee1b4ca43b.scope: Deactivated successfully.
Dec 05 09:47:01 compute-0 sudo[87749]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:01 compute-0 sudo[87828]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vngskqisqvcmxenhzdhmlrkbiwzywmbw ; /usr/bin/python3'
Dec 05 09:47:01 compute-0 sudo[87828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:47:01 compute-0 python3[87830]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:01 compute-0 podman[87831]: 2025-12-05 09:47:01.653039545 +0000 UTC m=+0.114911399 container create c59bb06b014b2a738faf5a7d41cc602a45e34f49d5af21b439f8da763178abb6 (image=quay.io/ceph/ceph:v19, name=quirky_kilby, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:01 compute-0 podman[87831]: 2025-12-05 09:47:01.558904933 +0000 UTC m=+0.020776817 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:01 compute-0 systemd[1]: Started libpod-conmon-c59bb06b014b2a738faf5a7d41cc602a45e34f49d5af21b439f8da763178abb6.scope.
Dec 05 09:47:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8cc711e2d5533ad550a7ebed5fb988f07577e8c618020d587f95d801d660e8c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8cc711e2d5533ad550a7ebed5fb988f07577e8c618020d587f95d801d660e8c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8cc711e2d5533ad550a7ebed5fb988f07577e8c618020d587f95d801d660e8c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:01 compute-0 podman[87831]: 2025-12-05 09:47:01.774456246 +0000 UTC m=+0.236328120 container init c59bb06b014b2a738faf5a7d41cc602a45e34f49d5af21b439f8da763178abb6 (image=quay.io/ceph/ceph:v19, name=quirky_kilby, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:47:01 compute-0 podman[87831]: 2025-12-05 09:47:01.781041822 +0000 UTC m=+0.242913676 container start c59bb06b014b2a738faf5a7d41cc602a45e34f49d5af21b439f8da763178abb6 (image=quay.io/ceph/ceph:v19, name=quirky_kilby, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:01 compute-0 podman[87831]: 2025-12-05 09:47:01.784632013 +0000 UTC m=+0.246503897 container attach c59bb06b014b2a738faf5a7d41cc602a45e34f49d5af21b439f8da763178abb6 (image=quay.io/ceph/ceph:v19, name=quirky_kilby, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 09:47:01 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1148467787; not ready for session (expect reconnect)
Dec 05 09:47:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:47:01 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:01 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 05 09:47:02 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:47:02 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:47:02 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:47:02 compute-0 ceph-mon[74418]: osdmap e32: 3 total, 2 up, 3 in
Dec 05 09:47:02 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:02 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:47:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4116606713' entity='client.admin' 
Dec 05 09:47:02 compute-0 ceph-mon[74418]: 2.a scrub starts
Dec 05 09:47:02 compute-0 ceph-mon[74418]: 2.a scrub ok
Dec 05 09:47:02 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 211e3aea-f500-48b1-834d-e9c066147723 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.19( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.18( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1f( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1e( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.17( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.10( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.11( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.12( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.14( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.13( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.13( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.14( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.12( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.15( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.16( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.10( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.17( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.8( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.9( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.d( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.a( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.b( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.b( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.c( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.d( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.7( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.7( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.6( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.2( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.1( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.6( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.2( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.5( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.4( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.4( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.3( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.8( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.f( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.e( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1d( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.1a( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1c( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.1b( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1b( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.1c( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1a( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.19( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.1e( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.1f( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.18( empty local-lis/les=15/16 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.10( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.12( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.11( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1f( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.15( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.16( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.17( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.14( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.8( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.9( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.b( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.13( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.d( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.c( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.0( empty local-lis/les=32/33 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.2( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.6( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.4( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.5( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.3( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.f( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.e( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1d( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1c( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1b( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1e( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.1a( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.19( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.a( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.7( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 33 pg[4.18( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=15/15 les/c/f=16/16/0 sis=32) [1] r=0 lpr=32 pi=[15,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1365574913' entity='client.admin' 
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v119: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:02 compute-0 systemd[1]: libpod-c59bb06b014b2a738faf5a7d41cc602a45e34f49d5af21b439f8da763178abb6.scope: Deactivated successfully.
Dec 05 09:47:02 compute-0 podman[87831]: 2025-12-05 09:47:02.198846975 +0000 UTC m=+0.660718839 container died c59bb06b014b2a738faf5a7d41cc602a45e34f49d5af21b439f8da763178abb6 (image=quay.io/ceph/ceph:v19, name=quirky_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 09:47:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8cc711e2d5533ad550a7ebed5fb988f07577e8c618020d587f95d801d660e8c-merged.mount: Deactivated successfully.
Dec 05 09:47:02 compute-0 podman[87831]: 2025-12-05 09:47:02.234404547 +0000 UTC m=+0.696276411 container remove c59bb06b014b2a738faf5a7d41cc602a45e34f49d5af21b439f8da763178abb6 (image=quay.io/ceph/ceph:v19, name=quirky_kilby, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:47:02 compute-0 systemd[1]: libpod-conmon-c59bb06b014b2a738faf5a7d41cc602a45e34f49d5af21b439f8da763178abb6.scope: Deactivated successfully.
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:47:02 compute-0 sudo[87828]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:47:02 compute-0 sudo[87883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:47:02 compute-0 sudo[87883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:02 compute-0 sudo[87883]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 compute-0 sudo[87908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph
Dec 05 09:47:02 compute-0 sudo[87908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:02 compute-0 sudo[87908]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 compute-0 sudo[87933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:47:02 compute-0 sudo[87933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:02 compute-0 sudo[87933]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 compute-0 sudo[87958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:47:02 compute-0 sudo[87958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:02 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 05 09:47:02 compute-0 sudo[87958]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 05 09:47:02 compute-0 sudo[87983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:47:02 compute-0 sudo[87983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:02 compute-0 sudo[87983]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 compute-0 sudo[88031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:47:02 compute-0 sudo[88031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:02 compute-0 sudo[88031]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 compute-0 sudo[88079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsrsmzihcnjynqhqsurggnpczpujoonb ; /usr/bin/python3'
Dec 05 09:47:02 compute-0 sudo[88079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:02 compute-0 sudo[88080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:47:02 compute-0 sudo[88080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:02 compute-0 sudo[88080]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 compute-0 sudo[88107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:47:02 compute-0 sudo[88107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:02 compute-0 sudo[88107]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:02 compute-0 sudo[88132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:47:02 compute-0 sudo[88132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:02 compute-0 sudo[88132]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 compute-0 python3[88098]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:02 compute-0 sudo[88157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:47:02 compute-0 sudo[88157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:02 compute-0 sudo[88157]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 compute-0 sudo[88079]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1148467787; not ready for session (expect reconnect)
Dec 05 09:47:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:47:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:02 compute-0 ceph-mgr[74711]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 09:47:03 compute-0 sudo[88195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:47:03 compute-0 sudo[88195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:03 compute-0 sudo[88195]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:03 compute-0 sudo[88220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:47:03 compute-0 sudo[88220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:03 compute-0 sudo[88220]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 05 09:47:03 compute-0 sudo[88245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:47:03 compute-0 sudo[88245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:03 compute-0 sudo[88245]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1148467787,v1:192.168.122.102:6801/1148467787] boot
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev a7e9613a-b0a9-4bc5-9d72-84aac4e2884f (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 51c14f0f-1b4d-4ba4-a0f8-d067b0097ba4 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 51c14f0f-1b4d-4ba4-a0f8-d067b0097ba4 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 6 seconds
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev ee43a326-c5a1-49fc-a694-35ba932dbb2a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event ee43a326-c5a1-49fc-a694-35ba932dbb2a (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 397ff74d-2e48-4235-9d7d-73e0dad58da9 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 397ff74d-2e48-4235-9d7d-73e0dad58da9 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 2f00832c-bb91-470e-8c94-9d9cd7a3bb6c (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 2f00832c-bb91-470e-8c94-9d9cd7a3bb6c (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 211e3aea-f500-48b1-834d-e9c066147723 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 211e3aea-f500-48b1-834d-e9c066147723 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev a7e9613a-b0a9-4bc5-9d72-84aac4e2884f (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 05 09:47:03 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event a7e9613a-b0a9-4bc5-9d72-84aac4e2884f (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec 05 09:47:03 compute-0 sudo[88293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:47:03 compute-0 sudo[88293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:03 compute-0 sudo[88293]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:47:03 compute-0 ceph-mon[74418]: osdmap e33: 3 total, 2 up, 3 in
Dec 05 09:47:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:47:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1365574913' entity='client.admin' 
Dec 05 09:47:03 compute-0 ceph-mon[74418]: pgmap v119: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 05 09:47:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:03 compute-0 ceph-mon[74418]: Adjusting osd_memory_target on compute-2 to 128.0M
Dec 05 09:47:03 compute-0 ceph-mon[74418]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 05 09:47:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:47:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:47:03 compute-0 ceph-mon[74418]: Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:47:03 compute-0 ceph-mon[74418]: Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:47:03 compute-0 ceph-mon[74418]: Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:47:03 compute-0 ceph-mon[74418]: 2.7 scrub starts
Dec 05 09:47:03 compute-0 ceph-mon[74418]: 2.7 scrub ok
Dec 05 09:47:03 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:03 compute-0 sudo[88344]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtjsrsuksnkhmsgpxlicrkmonsnohkfv ; /usr/bin/python3'
Dec 05 09:47:03 compute-0 sudo[88344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:03 compute-0 sudo[88341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:47:03 compute-0 sudo[88341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:03 compute-0 sudo[88341]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:03 compute-0 sudo[88369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:03 compute-0 sudo[88369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:03 compute-0 sudo[88369]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:03 compute-0 python3[88361]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.hvnxai/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:03 compute-0 podman[88394]: 2025-12-05 09:47:03.496119031 +0000 UTC m=+0.047780187 container create 83478dbf32a6666128b20c7bb023a926feb504832f5582cdcdac8952f3746f22 (image=quay.io/ceph/ceph:v19, name=optimistic_wiles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:47:03 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 05 09:47:03 compute-0 systemd[1]: Started libpod-conmon-83478dbf32a6666128b20c7bb023a926feb504832f5582cdcdac8952f3746f22.scope.
Dec 05 09:47:03 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 05 09:47:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3fae195340e7dac3beb7fec6b0c3ff83ba0da7a7185737241dd9726102ecf3b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3fae195340e7dac3beb7fec6b0c3ff83ba0da7a7185737241dd9726102ecf3b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3fae195340e7dac3beb7fec6b0c3ff83ba0da7a7185737241dd9726102ecf3b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:03 compute-0 podman[88394]: 2025-12-05 09:47:03.55958764 +0000 UTC m=+0.111248816 container init 83478dbf32a6666128b20c7bb023a926feb504832f5582cdcdac8952f3746f22 (image=quay.io/ceph/ceph:v19, name=optimistic_wiles, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:47:03 compute-0 podman[88394]: 2025-12-05 09:47:03.474799381 +0000 UTC m=+0.026460567 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:03 compute-0 podman[88394]: 2025-12-05 09:47:03.568887321 +0000 UTC m=+0.120548477 container start 83478dbf32a6666128b20c7bb023a926feb504832f5582cdcdac8952f3746f22 (image=quay.io/ceph/ceph:v19, name=optimistic_wiles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 05 09:47:03 compute-0 podman[88394]: 2025-12-05 09:47:03.572247296 +0000 UTC m=+0.123908462 container attach 83478dbf32a6666128b20c7bb023a926feb504832f5582cdcdac8952f3746f22 (image=quay.io/ceph/ceph:v19, name=optimistic_wiles, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:47:03 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:47:03 compute-0 sudo[88432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:47:03 compute-0 sudo[88432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:03 compute-0 sudo[88432]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:03 compute-0 sudo[88457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:47:03 compute-0 sudo[88457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.hvnxai/server_addr}] v 0)
Dec 05 09:47:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1754218220' entity='client.admin' 
Dec 05 09:47:04 compute-0 systemd[1]: libpod-83478dbf32a6666128b20c7bb023a926feb504832f5582cdcdac8952f3746f22.scope: Deactivated successfully.
Dec 05 09:47:04 compute-0 podman[88394]: 2025-12-05 09:47:04.16361007 +0000 UTC m=+0.715271226 container died 83478dbf32a6666128b20c7bb023a926feb504832f5582cdcdac8952f3746f22 (image=quay.io/ceph/ceph:v19, name=optimistic_wiles, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:47:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3fae195340e7dac3beb7fec6b0c3ff83ba0da7a7185737241dd9726102ecf3b-merged.mount: Deactivated successfully.
Dec 05 09:47:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v121: 162 pgs: 93 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 05 09:47:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:04 compute-0 podman[88394]: 2025-12-05 09:47:04.205798609 +0000 UTC m=+0.757459765 container remove 83478dbf32a6666128b20c7bb023a926feb504832f5582cdcdac8952f3746f22 (image=quay.io/ceph/ceph:v19, name=optimistic_wiles, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 09:47:04 compute-0 systemd[1]: libpod-conmon-83478dbf32a6666128b20c7bb023a926feb504832f5582cdcdac8952f3746f22.scope: Deactivated successfully.
Dec 05 09:47:04 compute-0 sudo[88344]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:04 compute-0 podman[88524]: 2025-12-05 09:47:04.242323819 +0000 UTC m=+0.077853355 container create a3d2d22dbc1dfaed85e73cfa7c948277153135349e497309d5710922e54f5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:47:04 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 12 completed events
Dec 05 09:47:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:47:04 compute-0 systemd[1]: Started libpod-conmon-a3d2d22dbc1dfaed85e73cfa7c948277153135349e497309d5710922e54f5715.scope.
Dec 05 09:47:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:04 compute-0 ceph-mon[74418]: 4.10 scrub starts
Dec 05 09:47:04 compute-0 ceph-mon[74418]: 4.10 scrub ok
Dec 05 09:47:04 compute-0 ceph-mon[74418]: OSD bench result of 6026.103889 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 05 09:47:04 compute-0 ceph-mon[74418]: Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:04 compute-0 ceph-mon[74418]: Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:04 compute-0 ceph-mon[74418]: Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:47:04 compute-0 ceph-mon[74418]: osd.2 [v2:192.168.122.102:6800/1148467787,v1:192.168.122.102:6801/1148467787] boot
Dec 05 09:47:04 compute-0 ceph-mon[74418]: osdmap e34: 3 total, 3 up, 3 in
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:04 compute-0 ceph-mon[74418]: 2.6 scrub starts
Dec 05 09:47:04 compute-0 ceph-mon[74418]: 2.6 scrub ok
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1754218220' entity='client.admin' 
Dec 05 09:47:04 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:04 compute-0 podman[88524]: 2025-12-05 09:47:04.296476434 +0000 UTC m=+0.132005970 container init a3d2d22dbc1dfaed85e73cfa7c948277153135349e497309d5710922e54f5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 09:47:04 compute-0 podman[88524]: 2025-12-05 09:47:04.302487694 +0000 UTC m=+0.138017220 container start a3d2d22dbc1dfaed85e73cfa7c948277153135349e497309d5710922e54f5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_wilbur, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:47:04 compute-0 nifty_wilbur[88552]: 167 167
Dec 05 09:47:04 compute-0 podman[88524]: 2025-12-05 09:47:04.306022223 +0000 UTC m=+0.141551759 container attach a3d2d22dbc1dfaed85e73cfa7c948277153135349e497309d5710922e54f5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_wilbur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 09:47:04 compute-0 systemd[1]: libpod-a3d2d22dbc1dfaed85e73cfa7c948277153135349e497309d5710922e54f5715.scope: Deactivated successfully.
Dec 05 09:47:04 compute-0 conmon[88552]: conmon a3d2d22dbc1dfaed85e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3d2d22dbc1dfaed85e73cfa7c948277153135349e497309d5710922e54f5715.scope/container/memory.events
Dec 05 09:47:04 compute-0 podman[88524]: 2025-12-05 09:47:04.308282177 +0000 UTC m=+0.143811723 container died a3d2d22dbc1dfaed85e73cfa7c948277153135349e497309d5710922e54f5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_wilbur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 09:47:04 compute-0 podman[88524]: 2025-12-05 09:47:04.223778856 +0000 UTC m=+0.059308412 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:47:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b2d23c2c1bd36eca510391ef6f56fdc733cc587866757d606dca7869c26ee56-merged.mount: Deactivated successfully.
Dec 05 09:47:04 compute-0 podman[88524]: 2025-12-05 09:47:04.339849636 +0000 UTC m=+0.175379172 container remove a3d2d22dbc1dfaed85e73cfa7c948277153135349e497309d5710922e54f5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_wilbur, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 05 09:47:04 compute-0 systemd[1]: libpod-conmon-a3d2d22dbc1dfaed85e73cfa7c948277153135349e497309d5710922e54f5715.scope: Deactivated successfully.
Dec 05 09:47:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 05 09:47:04 compute-0 podman[88575]: 2025-12-05 09:47:04.476026024 +0000 UTC m=+0.041764368 container create c92b5097b9c83800e3538af7d775758c3f94b3cff597899d9f60ee19a5a33ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_euclid, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 09:47:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:47:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec 05 09:47:04 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.1f( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.1e( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.1c( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.1e( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.1c( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.1f( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.1b( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.8( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.4( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.3( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.4( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.3( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.2( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.2( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.1( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.5( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.1( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.5( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.6( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=34 pruub=12.200570107s) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active pruub 99.266403198s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=5.195495605s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.261352539s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.7( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.6( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.7( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=34 pruub=3.044110298s) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.110107422s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.a( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=34 pruub=3.044086218s) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.110107422s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.a( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.b( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.c( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.b( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.c( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.d( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.d( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.f( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.10( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.f( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.10( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.12( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.12( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.13( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.13( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.14( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.14( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.16( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.16( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.17( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.17( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.18( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.18( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 34 pg[3.19( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[3.19( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=34) [2] r=-1 lpr=34 pi=[14,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=5.193493843s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.261352539s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=34 pruub=12.200570107s) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown pruub 99.266403198s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.1( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.2( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.3( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.4( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.b( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.c( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.5( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.6( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.f( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.10( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.11( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.12( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.d( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.e( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.7( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.8( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.9( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.a( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.13( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.14( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.15( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.16( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.17( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.18( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.19( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.1a( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.1c( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.1b( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.1d( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.1e( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[5.1f( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.2( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.3( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.4( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.5( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.6( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.7( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.8( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.1( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.d( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.e( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.9( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.a( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.11( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.12( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.f( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.10( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.b( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.c( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.13( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.14( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.15( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.17( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.18( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.16( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.19( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.1a( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.1b( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.1c( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.1d( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.1e( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 35 pg[6.1f( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:04 compute-0 systemd[1]: Started libpod-conmon-c92b5097b9c83800e3538af7d775758c3f94b3cff597899d9f60ee19a5a33ed1.scope.
Dec 05 09:47:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:04 compute-0 podman[88575]: 2025-12-05 09:47:04.455194657 +0000 UTC m=+0.020933011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39ec125de79f7e542cdecf74c0615bf019033b738daff8a0b0a681570306f9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39ec125de79f7e542cdecf74c0615bf019033b738daff8a0b0a681570306f9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39ec125de79f7e542cdecf74c0615bf019033b738daff8a0b0a681570306f9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39ec125de79f7e542cdecf74c0615bf019033b738daff8a0b0a681570306f9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39ec125de79f7e542cdecf74c0615bf019033b738daff8a0b0a681570306f9b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:04 compute-0 podman[88575]: 2025-12-05 09:47:04.569382954 +0000 UTC m=+0.135121338 container init c92b5097b9c83800e3538af7d775758c3f94b3cff597899d9f60ee19a5a33ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 09:47:04 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 05 09:47:04 compute-0 podman[88575]: 2025-12-05 09:47:04.577463062 +0000 UTC m=+0.143201406 container start c92b5097b9c83800e3538af7d775758c3f94b3cff597899d9f60ee19a5a33ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:47:04 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 05 09:47:04 compute-0 podman[88575]: 2025-12-05 09:47:04.581028252 +0000 UTC m=+0.146766616 container attach c92b5097b9c83800e3538af7d775758c3f94b3cff597899d9f60ee19a5a33ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_euclid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Dec 05 09:47:04 compute-0 pedantic_euclid[88591]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:47:04 compute-0 pedantic_euclid[88591]: --> All data devices are unavailable
Dec 05 09:47:04 compute-0 systemd[1]: libpod-c92b5097b9c83800e3538af7d775758c3f94b3cff597899d9f60ee19a5a33ed1.scope: Deactivated successfully.
Dec 05 09:47:04 compute-0 podman[88575]: 2025-12-05 09:47:04.943553129 +0000 UTC m=+0.509291473 container died c92b5097b9c83800e3538af7d775758c3f94b3cff597899d9f60ee19a5a33ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_euclid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 09:47:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e39ec125de79f7e542cdecf74c0615bf019033b738daff8a0b0a681570306f9b-merged.mount: Deactivated successfully.
Dec 05 09:47:04 compute-0 sudo[88639]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yervrplghafsedinpjovwfukhukukhkl ; /usr/bin/python3'
Dec 05 09:47:05 compute-0 podman[88575]: 2025-12-05 09:47:05.002854339 +0000 UTC m=+0.568592683 container remove c92b5097b9c83800e3538af7d775758c3f94b3cff597899d9f60ee19a5a33ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_euclid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:05 compute-0 sudo[88639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:05 compute-0 systemd[1]: libpod-conmon-c92b5097b9c83800e3538af7d775758c3f94b3cff597899d9f60ee19a5a33ed1.scope: Deactivated successfully.
Dec 05 09:47:05 compute-0 sudo[88457]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:05 compute-0 sudo[88642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:47:05 compute-0 sudo[88642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:05 compute-0 sudo[88642]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:05 compute-0 sudo[88667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:47:05 compute-0 sudo[88667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:05 compute-0 python3[88641]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.unhddt/server_addr 192.168.122.101
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:05 compute-0 podman[88692]: 2025-12-05 09:47:05.243398998 +0000 UTC m=+0.045032990 container create befce6b65a98380dc7d4144a87e82dff746908e01fe46b9b544893827ae0d1db (image=quay.io/ceph/ceph:v19, name=vibrant_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:47:05 compute-0 systemd[1]: Started libpod-conmon-befce6b65a98380dc7d4144a87e82dff746908e01fe46b9b544893827ae0d1db.scope.
Dec 05 09:47:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:05 compute-0 ceph-mon[74418]: 4.12 scrub starts
Dec 05 09:47:05 compute-0 ceph-mon[74418]: 4.12 scrub ok
Dec 05 09:47:05 compute-0 ceph-mon[74418]: pgmap v121: 162 pgs: 93 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:05 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:05 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:47:05 compute-0 ceph-mon[74418]: osdmap e35: 3 total, 3 up, 3 in
Dec 05 09:47:05 compute-0 ceph-mon[74418]: 2.9 scrub starts
Dec 05 09:47:05 compute-0 ceph-mon[74418]: 2.9 scrub ok
Dec 05 09:47:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a90a204ebdb50aae9aaa9fd9447d8a613ceba9eeb7ad17a06ae2689aa572e6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a90a204ebdb50aae9aaa9fd9447d8a613ceba9eeb7ad17a06ae2689aa572e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a90a204ebdb50aae9aaa9fd9447d8a613ceba9eeb7ad17a06ae2689aa572e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:05 compute-0 podman[88692]: 2025-12-05 09:47:05.315650774 +0000 UTC m=+0.117284786 container init befce6b65a98380dc7d4144a87e82dff746908e01fe46b9b544893827ae0d1db (image=quay.io/ceph/ceph:v19, name=vibrant_ellis, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 09:47:05 compute-0 podman[88692]: 2025-12-05 09:47:05.224769553 +0000 UTC m=+0.026403575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:05 compute-0 podman[88692]: 2025-12-05 09:47:05.326390926 +0000 UTC m=+0.128024918 container start befce6b65a98380dc7d4144a87e82dff746908e01fe46b9b544893827ae0d1db (image=quay.io/ceph/ceph:v19, name=vibrant_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 09:47:05 compute-0 podman[88692]: 2025-12-05 09:47:05.329751281 +0000 UTC m=+0.131385273 container attach befce6b65a98380dc7d4144a87e82dff746908e01fe46b9b544893827ae0d1db (image=quay.io/ceph/ceph:v19, name=vibrant_ellis, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:47:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 05 09:47:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec 05 09:47:05 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.1b( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.1a( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.18( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.19( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.1f( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.c( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.1e( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.d( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.1( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.7( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.6( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.0( empty local-lis/les=34/36 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.4( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.2( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.5( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.3( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.e( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.9( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.8( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.b( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.a( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.15( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.14( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.17( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.16( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.11( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.13( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.12( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.1d( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.1c( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.10( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 36 pg[6.f( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [1] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:05 compute-0 podman[88769]: 2025-12-05 09:47:05.571894644 +0000 UTC m=+0.041970273 container create 88ad8ff644242aec72f729b6ce48df87297e5da38e27d8c1cc8ae1c78793fa91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:05 compute-0 systemd[1]: Started libpod-conmon-88ad8ff644242aec72f729b6ce48df87297e5da38e27d8c1cc8ae1c78793fa91.scope.
Dec 05 09:47:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:05 compute-0 podman[88769]: 2025-12-05 09:47:05.647599118 +0000 UTC m=+0.117674737 container init 88ad8ff644242aec72f729b6ce48df87297e5da38e27d8c1cc8ae1c78793fa91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:47:05 compute-0 podman[88769]: 2025-12-05 09:47:05.555594965 +0000 UTC m=+0.025670594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:47:05 compute-0 podman[88769]: 2025-12-05 09:47:05.652587568 +0000 UTC m=+0.122663207 container start 88ad8ff644242aec72f729b6ce48df87297e5da38e27d8c1cc8ae1c78793fa91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 09:47:05 compute-0 quirky_hofstadter[88786]: 167 167
Dec 05 09:47:05 compute-0 systemd[1]: libpod-88ad8ff644242aec72f729b6ce48df87297e5da38e27d8c1cc8ae1c78793fa91.scope: Deactivated successfully.
Dec 05 09:47:05 compute-0 podman[88769]: 2025-12-05 09:47:05.658031052 +0000 UTC m=+0.128106701 container attach 88ad8ff644242aec72f729b6ce48df87297e5da38e27d8c1cc8ae1c78793fa91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 09:47:05 compute-0 podman[88769]: 2025-12-05 09:47:05.658394222 +0000 UTC m=+0.128469851 container died 88ad8ff644242aec72f729b6ce48df87297e5da38e27d8c1cc8ae1c78793fa91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:47:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e7e2dd4ee7cd359a92f977bd136bae510b82705f74898e7e3ae9db5f58fcbe-merged.mount: Deactivated successfully.
Dec 05 09:47:05 compute-0 podman[88769]: 2025-12-05 09:47:05.733018705 +0000 UTC m=+0.203094334 container remove 88ad8ff644242aec72f729b6ce48df87297e5da38e27d8c1cc8ae1c78793fa91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hofstadter, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 05 09:47:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.unhddt/server_addr}] v 0)
Dec 05 09:47:05 compute-0 systemd[1]: libpod-conmon-88ad8ff644242aec72f729b6ce48df87297e5da38e27d8c1cc8ae1c78793fa91.scope: Deactivated successfully.
Dec 05 09:47:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1474032133' entity='client.admin' 
Dec 05 09:47:05 compute-0 systemd[1]: libpod-befce6b65a98380dc7d4144a87e82dff746908e01fe46b9b544893827ae0d1db.scope: Deactivated successfully.
Dec 05 09:47:05 compute-0 podman[88692]: 2025-12-05 09:47:05.77935811 +0000 UTC m=+0.580992122 container died befce6b65a98380dc7d4144a87e82dff746908e01fe46b9b544893827ae0d1db (image=quay.io/ceph/ceph:v19, name=vibrant_ellis, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:47:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8a90a204ebdb50aae9aaa9fd9447d8a613ceba9eeb7ad17a06ae2689aa572e6-merged.mount: Deactivated successfully.
Dec 05 09:47:05 compute-0 podman[88692]: 2025-12-05 09:47:05.821421636 +0000 UTC m=+0.623055628 container remove befce6b65a98380dc7d4144a87e82dff746908e01fe46b9b544893827ae0d1db (image=quay.io/ceph/ceph:v19, name=vibrant_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 09:47:05 compute-0 systemd[1]: libpod-conmon-befce6b65a98380dc7d4144a87e82dff746908e01fe46b9b544893827ae0d1db.scope: Deactivated successfully.
Dec 05 09:47:05 compute-0 sudo[88639]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:05 compute-0 podman[88823]: 2025-12-05 09:47:05.908398637 +0000 UTC m=+0.045403231 container create 7db4b1927ed6198b787af4faa0a10d4a45982d11b36bd7e43c2c447dca08a9d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:05 compute-0 systemd[1]: Started libpod-conmon-7db4b1927ed6198b787af4faa0a10d4a45982d11b36bd7e43c2c447dca08a9d0.scope.
Dec 05 09:47:05 compute-0 podman[88823]: 2025-12-05 09:47:05.887486948 +0000 UTC m=+0.024491552 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:47:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72421a5a430296ed2299a48b4cb79d99b5ecb349cbe041be2ecae04dfc7c1885/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72421a5a430296ed2299a48b4cb79d99b5ecb349cbe041be2ecae04dfc7c1885/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72421a5a430296ed2299a48b4cb79d99b5ecb349cbe041be2ecae04dfc7c1885/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72421a5a430296ed2299a48b4cb79d99b5ecb349cbe041be2ecae04dfc7c1885/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:06 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 05 09:47:06 compute-0 podman[88823]: 2025-12-05 09:47:06.010959377 +0000 UTC m=+0.147963991 container init 7db4b1927ed6198b787af4faa0a10d4a45982d11b36bd7e43c2c447dca08a9d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 05 09:47:06 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 05 09:47:06 compute-0 podman[88823]: 2025-12-05 09:47:06.019828587 +0000 UTC m=+0.156833171 container start 7db4b1927ed6198b787af4faa0a10d4a45982d11b36bd7e43c2c447dca08a9d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_poitras, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 05 09:47:06 compute-0 podman[88823]: 2025-12-05 09:47:06.025293531 +0000 UTC m=+0.162298115 container attach 7db4b1927ed6198b787af4faa0a10d4a45982d11b36bd7e43c2c447dca08a9d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v124: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:06 compute-0 fervent_poitras[88840]: {
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:     "1": [
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:         {
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             "devices": [
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "/dev/loop3"
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             ],
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             "lv_name": "ceph_lv0",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             "lv_size": "21470642176",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             "name": "ceph_lv0",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             "tags": {
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.cluster_name": "ceph",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.crush_device_class": "",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.encrypted": "0",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.osd_id": "1",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.type": "block",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.vdo": "0",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:                 "ceph.with_tpm": "0"
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             },
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             "type": "block",
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:             "vg_name": "ceph_vg0"
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:         }
Dec 05 09:47:06 compute-0 fervent_poitras[88840]:     ]
Dec 05 09:47:06 compute-0 fervent_poitras[88840]: }
Dec 05 09:47:06 compute-0 ceph-mon[74418]: 4.11 scrub starts
Dec 05 09:47:06 compute-0 ceph-mon[74418]: 4.11 scrub ok
Dec 05 09:47:06 compute-0 ceph-mon[74418]: osdmap e36: 3 total, 3 up, 3 in
Dec 05 09:47:06 compute-0 ceph-mon[74418]: 3.16 scrub starts
Dec 05 09:47:06 compute-0 ceph-mon[74418]: 3.16 scrub ok
Dec 05 09:47:06 compute-0 ceph-mon[74418]: 2.4 scrub starts
Dec 05 09:47:06 compute-0 ceph-mon[74418]: 2.4 scrub ok
Dec 05 09:47:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1474032133' entity='client.admin' 
Dec 05 09:47:06 compute-0 systemd[1]: libpod-7db4b1927ed6198b787af4faa0a10d4a45982d11b36bd7e43c2c447dca08a9d0.scope: Deactivated successfully.
Dec 05 09:47:06 compute-0 podman[88823]: 2025-12-05 09:47:06.332424936 +0000 UTC m=+0.469429530 container died 7db4b1927ed6198b787af4faa0a10d4a45982d11b36bd7e43c2c447dca08a9d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_poitras, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-72421a5a430296ed2299a48b4cb79d99b5ecb349cbe041be2ecae04dfc7c1885-merged.mount: Deactivated successfully.
Dec 05 09:47:06 compute-0 podman[88823]: 2025-12-05 09:47:06.381084347 +0000 UTC m=+0.518088931 container remove 7db4b1927ed6198b787af4faa0a10d4a45982d11b36bd7e43c2c447dca08a9d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_poitras, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 05 09:47:06 compute-0 systemd[1]: libpod-conmon-7db4b1927ed6198b787af4faa0a10d4a45982d11b36bd7e43c2c447dca08a9d0.scope: Deactivated successfully.
Dec 05 09:47:06 compute-0 sudo[88667]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:06 compute-0 sudo[88860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:47:06 compute-0 sudo[88860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:06 compute-0 sudo[88860]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:47:06 compute-0 sudo[88885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:47:06 compute-0 sudo[88885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:06 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 05 09:47:06 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 05 09:47:06 compute-0 sudo[88933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbjpbluaklxdoabalautmnytsliswloj ; /usr/bin/python3'
Dec 05 09:47:06 compute-0 sudo[88933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:06 compute-0 python3[88935]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.wewrgp/server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:06 compute-0 podman[88936]: 2025-12-05 09:47:06.81061316 +0000 UTC m=+0.043456705 container create 3f2e8703b14cdcb00aa30f2d5e845c0c767bff10983a3bfae39799f6432ad222 (image=quay.io/ceph/ceph:v19, name=eloquent_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 09:47:06 compute-0 systemd[1]: Started libpod-conmon-3f2e8703b14cdcb00aa30f2d5e845c0c767bff10983a3bfae39799f6432ad222.scope.
Dec 05 09:47:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b9bfea85dda308653ce28746b9dae102069c947f98a321e1b94ced21b0f72a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b9bfea85dda308653ce28746b9dae102069c947f98a321e1b94ced21b0f72a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b9bfea85dda308653ce28746b9dae102069c947f98a321e1b94ced21b0f72a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:06 compute-0 podman[88936]: 2025-12-05 09:47:06.791399609 +0000 UTC m=+0.024243114 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:06 compute-0 podman[88936]: 2025-12-05 09:47:06.888581867 +0000 UTC m=+0.121425382 container init 3f2e8703b14cdcb00aa30f2d5e845c0c767bff10983a3bfae39799f6432ad222 (image=quay.io/ceph/ceph:v19, name=eloquent_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Dec 05 09:47:06 compute-0 podman[88936]: 2025-12-05 09:47:06.896745528 +0000 UTC m=+0.129589023 container start 3f2e8703b14cdcb00aa30f2d5e845c0c767bff10983a3bfae39799f6432ad222 (image=quay.io/ceph/ceph:v19, name=eloquent_shamir, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:47:06 compute-0 podman[88936]: 2025-12-05 09:47:06.901629575 +0000 UTC m=+0.134473100 container attach 3f2e8703b14cdcb00aa30f2d5e845c0c767bff10983a3bfae39799f6432ad222 (image=quay.io/ceph/ceph:v19, name=eloquent_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:47:07 compute-0 podman[88993]: 2025-12-05 09:47:07.020215107 +0000 UTC m=+0.042212701 container create ebec5203b92805e0519f947a68a0f91e1be435861a59e89442623f99ff62afaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 05 09:47:07 compute-0 systemd[1]: Started libpod-conmon-ebec5203b92805e0519f947a68a0f91e1be435861a59e89442623f99ff62afaf.scope.
Dec 05 09:47:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:07 compute-0 podman[88993]: 2025-12-05 09:47:07.093872172 +0000 UTC m=+0.115869776 container init ebec5203b92805e0519f947a68a0f91e1be435861a59e89442623f99ff62afaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 05 09:47:07 compute-0 podman[88993]: 2025-12-05 09:47:07.098649767 +0000 UTC m=+0.120647361 container start ebec5203b92805e0519f947a68a0f91e1be435861a59e89442623f99ff62afaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_darwin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 09:47:07 compute-0 podman[88993]: 2025-12-05 09:47:07.003176977 +0000 UTC m=+0.025174601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:47:07 compute-0 angry_darwin[89028]: 167 167
Dec 05 09:47:07 compute-0 systemd[1]: libpod-ebec5203b92805e0519f947a68a0f91e1be435861a59e89442623f99ff62afaf.scope: Deactivated successfully.
Dec 05 09:47:07 compute-0 podman[88993]: 2025-12-05 09:47:07.104368998 +0000 UTC m=+0.126366632 container attach ebec5203b92805e0519f947a68a0f91e1be435861a59e89442623f99ff62afaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_darwin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Dec 05 09:47:07 compute-0 podman[88993]: 2025-12-05 09:47:07.104738138 +0000 UTC m=+0.126735762 container died ebec5203b92805e0519f947a68a0f91e1be435861a59e89442623f99ff62afaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_darwin, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 05 09:47:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8309a3fc7717cbeb80e5740047a22a74ee816fce80fb180c3b772bfb6e5f9cbe-merged.mount: Deactivated successfully.
Dec 05 09:47:07 compute-0 podman[88993]: 2025-12-05 09:47:07.142934145 +0000 UTC m=+0.164931739 container remove ebec5203b92805e0519f947a68a0f91e1be435861a59e89442623f99ff62afaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_darwin, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 05 09:47:07 compute-0 systemd[1]: libpod-conmon-ebec5203b92805e0519f947a68a0f91e1be435861a59e89442623f99ff62afaf.scope: Deactivated successfully.
Dec 05 09:47:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.wewrgp/server_addr}] v 0)
Dec 05 09:47:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2083845623' entity='client.admin' 
Dec 05 09:47:07 compute-0 podman[89052]: 2025-12-05 09:47:07.302021618 +0000 UTC m=+0.043457846 container create d37e8ae20b15ed3462da631415cde61696973af1c2ee5d6808590cb28cad49dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 05 09:47:07 compute-0 systemd[1]: libpod-3f2e8703b14cdcb00aa30f2d5e845c0c767bff10983a3bfae39799f6432ad222.scope: Deactivated successfully.
Dec 05 09:47:07 compute-0 podman[88936]: 2025-12-05 09:47:07.315424646 +0000 UTC m=+0.548268171 container died 3f2e8703b14cdcb00aa30f2d5e845c0c767bff10983a3bfae39799f6432ad222 (image=quay.io/ceph/ceph:v19, name=eloquent_shamir, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 05 09:47:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec 05 09:47:07 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec 05 09:47:07 compute-0 systemd[1]: Started libpod-conmon-d37e8ae20b15ed3462da631415cde61696973af1c2ee5d6808590cb28cad49dd.scope.
Dec 05 09:47:07 compute-0 ceph-mon[74418]: 4.1f scrub starts
Dec 05 09:47:07 compute-0 ceph-mon[74418]: 4.1f scrub ok
Dec 05 09:47:07 compute-0 ceph-mon[74418]: pgmap v124: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:07 compute-0 ceph-mon[74418]: 3.17 scrub starts
Dec 05 09:47:07 compute-0 ceph-mon[74418]: 3.17 scrub ok
Dec 05 09:47:07 compute-0 ceph-mon[74418]: 2.1d scrub starts
Dec 05 09:47:07 compute-0 ceph-mon[74418]: 2.1d scrub ok
Dec 05 09:47:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2083845623' entity='client.admin' 
Dec 05 09:47:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-46b9bfea85dda308653ce28746b9dae102069c947f98a321e1b94ced21b0f72a-merged.mount: Deactivated successfully.
Dec 05 09:47:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:07 compute-0 podman[88936]: 2025-12-05 09:47:07.36421279 +0000 UTC m=+0.597056295 container remove 3f2e8703b14cdcb00aa30f2d5e845c0c767bff10983a3bfae39799f6432ad222 (image=quay.io/ceph/ceph:v19, name=eloquent_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:47:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea89006ffe5036ca45e166ce035965eb7bff4bbc3973b1eac111978ed3e825c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea89006ffe5036ca45e166ce035965eb7bff4bbc3973b1eac111978ed3e825c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea89006ffe5036ca45e166ce035965eb7bff4bbc3973b1eac111978ed3e825c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea89006ffe5036ca45e166ce035965eb7bff4bbc3973b1eac111978ed3e825c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:07 compute-0 systemd[1]: libpod-conmon-3f2e8703b14cdcb00aa30f2d5e845c0c767bff10983a3bfae39799f6432ad222.scope: Deactivated successfully.
Dec 05 09:47:07 compute-0 podman[89052]: 2025-12-05 09:47:07.283541407 +0000 UTC m=+0.024977655 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:47:07 compute-0 podman[89052]: 2025-12-05 09:47:07.38232847 +0000 UTC m=+0.123764728 container init d37e8ae20b15ed3462da631415cde61696973af1c2ee5d6808590cb28cad49dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 09:47:07 compute-0 podman[89052]: 2025-12-05 09:47:07.389220975 +0000 UTC m=+0.130657203 container start d37e8ae20b15ed3462da631415cde61696973af1c2ee5d6808590cb28cad49dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_meninsky, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:07 compute-0 sudo[88933]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:07 compute-0 podman[89052]: 2025-12-05 09:47:07.392971081 +0000 UTC m=+0.134407319 container attach d37e8ae20b15ed3462da631415cde61696973af1c2ee5d6808590cb28cad49dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:07 compute-0 sudo[89112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoddomtuabgsqiswzinieznxjdmyjvos ; /usr/bin/python3'
Dec 05 09:47:07 compute-0 sudo[89112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:07 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 05 09:47:07 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 05 09:47:07 compute-0 python3[89115]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:07 compute-0 podman[89143]: 2025-12-05 09:47:07.762637457 +0000 UTC m=+0.028204585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:07 compute-0 podman[89143]: 2025-12-05 09:47:07.993816352 +0000 UTC m=+0.259383460 container create 3e7c03c6a137ba0b4353edd43681c397615d7b1d8acc6d4787a1ea954e2722d6 (image=quay.io/ceph/ceph:v19, name=priceless_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 09:47:07 compute-0 lvm[89194]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:47:07 compute-0 lvm[89194]: VG ceph_vg0 finished
Dec 05 09:47:08 compute-0 affectionate_meninsky[89081]: {}
Dec 05 09:47:08 compute-0 systemd[1]: libpod-d37e8ae20b15ed3462da631415cde61696973af1c2ee5d6808590cb28cad49dd.scope: Deactivated successfully.
Dec 05 09:47:08 compute-0 systemd[1]: libpod-d37e8ae20b15ed3462da631415cde61696973af1c2ee5d6808590cb28cad49dd.scope: Consumed 1.089s CPU time.
Dec 05 09:47:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v126: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:08 compute-0 podman[89052]: 2025-12-05 09:47:08.310942258 +0000 UTC m=+1.052378506 container died d37e8ae20b15ed3462da631415cde61696973af1c2ee5d6808590cb28cad49dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_meninsky, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:47:08 compute-0 systemd[1]: Started libpod-conmon-3e7c03c6a137ba0b4353edd43681c397615d7b1d8acc6d4787a1ea954e2722d6.scope.
Dec 05 09:47:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9058cad8fddd448f283848122c6b1f617afd890374e3947c2dfe97e52dbf5944/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9058cad8fddd448f283848122c6b1f617afd890374e3947c2dfe97e52dbf5944/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9058cad8fddd448f283848122c6b1f617afd890374e3947c2dfe97e52dbf5944/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:08 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 05 09:47:08 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 05 09:47:08 compute-0 ceph-mon[74418]: 4.15 scrub starts
Dec 05 09:47:08 compute-0 ceph-mon[74418]: 4.15 scrub ok
Dec 05 09:47:08 compute-0 ceph-mon[74418]: osdmap e37: 3 total, 3 up, 3 in
Dec 05 09:47:08 compute-0 ceph-mon[74418]: 3.15 scrub starts
Dec 05 09:47:08 compute-0 ceph-mon[74418]: 3.15 scrub ok
Dec 05 09:47:08 compute-0 ceph-mon[74418]: 2.1b scrub starts
Dec 05 09:47:08 compute-0 ceph-mon[74418]: 2.1b scrub ok
Dec 05 09:47:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea89006ffe5036ca45e166ce035965eb7bff4bbc3973b1eac111978ed3e825c6-merged.mount: Deactivated successfully.
Dec 05 09:47:09 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 05 09:47:09 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 05 09:47:10 compute-0 podman[89198]: 2025-12-05 09:47:10.168400529 +0000 UTC m=+2.072488391 container remove d37e8ae20b15ed3462da631415cde61696973af1c2ee5d6808590cb28cad49dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:47:10 compute-0 systemd[1]: libpod-conmon-d37e8ae20b15ed3462da631415cde61696973af1c2ee5d6808590cb28cad49dd.scope: Deactivated successfully.
Dec 05 09:47:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v127: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:10 compute-0 sudo[88885]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:47:10 compute-0 podman[89143]: 2025-12-05 09:47:10.548984504 +0000 UTC m=+2.814551642 container init 3e7c03c6a137ba0b4353edd43681c397615d7b1d8acc6d4787a1ea954e2722d6 (image=quay.io/ceph/ceph:v19, name=priceless_goodall, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 09:47:10 compute-0 podman[89143]: 2025-12-05 09:47:10.558536333 +0000 UTC m=+2.824103441 container start 3e7c03c6a137ba0b4353edd43681c397615d7b1d8acc6d4787a1ea954e2722d6 (image=quay.io/ceph/ceph:v19, name=priceless_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Dec 05 09:47:10 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 05 09:47:10 compute-0 ceph-mon[74418]: 4.16 scrub starts
Dec 05 09:47:10 compute-0 ceph-mon[74418]: 4.16 scrub ok
Dec 05 09:47:10 compute-0 ceph-mon[74418]: pgmap v126: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:10 compute-0 ceph-mon[74418]: 5.13 scrub starts
Dec 05 09:47:10 compute-0 ceph-mon[74418]: 5.13 scrub ok
Dec 05 09:47:10 compute-0 ceph-mon[74418]: 4.17 scrub starts
Dec 05 09:47:10 compute-0 ceph-mon[74418]: 4.17 scrub ok
Dec 05 09:47:10 compute-0 ceph-mon[74418]: 2.1e scrub starts
Dec 05 09:47:10 compute-0 ceph-mon[74418]: 2.1e scrub ok
Dec 05 09:47:10 compute-0 ceph-mon[74418]: 3.14 deep-scrub starts
Dec 05 09:47:10 compute-0 ceph-mon[74418]: 3.14 deep-scrub ok
Dec 05 09:47:10 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 05 09:47:10 compute-0 podman[89143]: 2025-12-05 09:47:10.703015685 +0000 UTC m=+2.968582803 container attach 3e7c03c6a137ba0b4353edd43681c397615d7b1d8acc6d4787a1ea954e2722d6 (image=quay.io/ceph/ceph:v19, name=priceless_goodall, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:47:10 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:47:10 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:10 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 9a874e85-2ce2-49ea-b155-20f39657b313 (Updating rgw.rgw deployment (+3 -> 3))
Dec 05 09:47:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gzawrf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 05 09:47:10 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gzawrf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:47:10 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gzawrf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 09:47:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 05 09:47:10 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:47:10 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:47:10 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.gzawrf on compute-2
Dec 05 09:47:10 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.gzawrf on compute-2
Dec 05 09:47:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec 05 09:47:11 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1169778095' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 05 09:47:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:47:11 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 05 09:47:11 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 05 09:47:11 compute-0 ceph-mon[74418]: 4.14 scrub starts
Dec 05 09:47:11 compute-0 ceph-mon[74418]: 4.14 scrub ok
Dec 05 09:47:11 compute-0 ceph-mon[74418]: 2.8 scrub starts
Dec 05 09:47:11 compute-0 ceph-mon[74418]: 2.8 scrub ok
Dec 05 09:47:11 compute-0 ceph-mon[74418]: pgmap v127: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:11 compute-0 ceph-mon[74418]: 5.11 scrub starts
Dec 05 09:47:11 compute-0 ceph-mon[74418]: 5.11 scrub ok
Dec 05 09:47:11 compute-0 ceph-mon[74418]: 2.1 scrub starts
Dec 05 09:47:11 compute-0 ceph-mon[74418]: 2.1 scrub ok
Dec 05 09:47:11 compute-0 ceph-mon[74418]: 4.8 scrub starts
Dec 05 09:47:11 compute-0 ceph-mon[74418]: 4.8 scrub ok
Dec 05 09:47:11 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:11 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:11 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gzawrf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:47:11 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gzawrf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 09:47:11 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:11 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:47:11 compute-0 ceph-mon[74418]: Deploying daemon rgw.rgw.compute-2.gzawrf on compute-2
Dec 05 09:47:11 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1169778095' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 05 09:47:11 compute-0 ceph-mon[74418]: 5.10 deep-scrub starts
Dec 05 09:47:11 compute-0 ceph-mon[74418]: 5.10 deep-scrub ok
Dec 05 09:47:11 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1169778095' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 05 09:47:11 compute-0 priceless_goodall[89216]: module 'dashboard' is already disabled
Dec 05 09:47:11 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.hvnxai(active, since 3m), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:47:11 compute-0 systemd[1]: libpod-3e7c03c6a137ba0b4353edd43681c397615d7b1d8acc6d4787a1ea954e2722d6.scope: Deactivated successfully.
Dec 05 09:47:11 compute-0 podman[89143]: 2025-12-05 09:47:11.901190847 +0000 UTC m=+4.166758015 container died 3e7c03c6a137ba0b4353edd43681c397615d7b1d8acc6d4787a1ea954e2722d6 (image=quay.io/ceph/ceph:v19, name=priceless_goodall, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:47:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-9058cad8fddd448f283848122c6b1f617afd890374e3947c2dfe97e52dbf5944-merged.mount: Deactivated successfully.
Dec 05 09:47:11 compute-0 podman[89143]: 2025-12-05 09:47:11.94565783 +0000 UTC m=+4.211224938 container remove 3e7c03c6a137ba0b4353edd43681c397615d7b1d8acc6d4787a1ea954e2722d6 (image=quay.io/ceph/ceph:v19, name=priceless_goodall, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:47:11 compute-0 systemd[1]: libpod-conmon-3e7c03c6a137ba0b4353edd43681c397615d7b1d8acc6d4787a1ea954e2722d6.scope: Deactivated successfully.
Dec 05 09:47:11 compute-0 sudo[89112]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:12 compute-0 sudo[89277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onilbquhofqrqyvndpomjenppkcwqlml ; /usr/bin/python3'
Dec 05 09:47:12 compute-0 sudo[89277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v128: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:12 compute-0 python3[89279]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:12 compute-0 podman[89280]: 2025-12-05 09:47:12.332712198 +0000 UTC m=+0.026977351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:12 compute-0 podman[89280]: 2025-12-05 09:47:12.490430952 +0000 UTC m=+0.184696065 container create 5511e28a53fa53a3d324b9dd9de5b74b724f33ef8490b3d7e37fe9d5c2fd0669 (image=quay.io/ceph/ceph:v19, name=vibrant_knuth, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:12 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 05 09:47:12 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 05 09:47:12 compute-0 systemd[1]: Started libpod-conmon-5511e28a53fa53a3d324b9dd9de5b74b724f33ef8490b3d7e37fe9d5c2fd0669.scope.
Dec 05 09:47:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a21622c1c747b38c8004e8302a88c0e23c02d2a1e1b2f5cd99b26de8844f11/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a21622c1c747b38c8004e8302a88c0e23c02d2a1e1b2f5cd99b26de8844f11/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a21622c1c747b38c8004e8302a88c0e23c02d2a1e1b2f5cd99b26de8844f11/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:12 compute-0 podman[89280]: 2025-12-05 09:47:12.656507842 +0000 UTC m=+0.350772965 container init 5511e28a53fa53a3d324b9dd9de5b74b724f33ef8490b3d7e37fe9d5c2fd0669 (image=quay.io/ceph/ceph:v19, name=vibrant_knuth, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:12 compute-0 podman[89280]: 2025-12-05 09:47:12.663836629 +0000 UTC m=+0.358101732 container start 5511e28a53fa53a3d324b9dd9de5b74b724f33ef8490b3d7e37fe9d5c2fd0669 (image=quay.io/ceph/ceph:v19, name=vibrant_knuth, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:47:12 compute-0 podman[89280]: 2025-12-05 09:47:12.667542813 +0000 UTC m=+0.361807936 container attach 5511e28a53fa53a3d324b9dd9de5b74b724f33ef8490b3d7e37fe9d5c2fd0669 (image=quay.io/ceph/ceph:v19, name=vibrant_knuth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oiufcm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oiufcm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oiufcm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec 05 09:47:12 compute-0 ceph-mon[74418]: 2.0 scrub starts
Dec 05 09:47:12 compute-0 ceph-mon[74418]: 2.0 scrub ok
Dec 05 09:47:12 compute-0 ceph-mon[74418]: 4.b scrub starts
Dec 05 09:47:12 compute-0 ceph-mon[74418]: 4.b scrub ok
Dec 05 09:47:12 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1169778095' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mgrmap e12: compute-0.hvnxai(active, since 3m), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:47:12 compute-0 ceph-mon[74418]: pgmap v128: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:12 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-mon[74418]: 3.12 scrub starts
Dec 05 09:47:12 compute-0 ceph-mon[74418]: 3.12 scrub ok
Dec 05 09:47:12 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:12 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:12 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:12 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oiufcm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.1a( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623859406s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.086326599s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.18( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.244492531s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.706962585s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.19( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.242029190s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704528809s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.1a( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623818398s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.086326599s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.18( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.244451523s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.706962585s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.19( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.241980553s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704528809s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.1b( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623577118s) [2] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.086311340s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.1b( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623558044s) [2] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.086311340s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.1a( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.241599083s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704483032s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.1b( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.241559029s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704467773s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.1b( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.241547585s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704467773s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.19( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623427391s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.086395264s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.1a( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.241573334s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704483032s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.19( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623410225s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.086395264s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.1d( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.241220474s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704429626s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.1d( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.241194725s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704429626s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.e( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.241146088s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704421997s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.e( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.241132736s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704421997s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.d( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623548508s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.086929321s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.3( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.240969658s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704414368s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.1( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623517990s) [2] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.086997986s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.d( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623467445s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.086929321s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.1( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623506546s) [2] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.086997986s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.3( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.240945816s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704414368s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.5( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.240650177s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704353333s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.7( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623299599s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.087013245s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.6( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.240616798s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704345703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.7( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623284340s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.087013245s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.6( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.240594864s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704345703s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.1e( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623157501s) [2] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.086952209s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.1e( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.623145103s) [2] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.086952209s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.2( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.240335464s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704330444s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.1( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.240320206s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704322815s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.2( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.240299225s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704330444s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.1( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.240286827s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704322815s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.3( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.628595352s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.092689514s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.3( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.628582001s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.092689514s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.1c( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.240181923s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704460144s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.1c( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.240169525s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704460144s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.5( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.628280640s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.092582703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.5( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.628265381s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.092582703s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.2( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.628021240s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.092552185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.e( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.628188133s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.092697144s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.e( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.628141403s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.092697144s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.2( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.628007889s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.092552185s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.a( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.242059708s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.706848145s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.a( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.242041588s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.706848145s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.8( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.627927780s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.092781067s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.5( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.239490509s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704353333s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.8( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.627910614s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.092781067s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.d( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.239031792s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704025269s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.9( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.238862991s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.703857422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.d( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.239013672s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704025269s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.9( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.238842964s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.703857422s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.c( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.239569664s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.704025269s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.8( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.238630295s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.703750610s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.c( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.238896370s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.704025269s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.8( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.238617897s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.703750610s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.15( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.627512932s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.092842102s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.15( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.238338470s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.703681946s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.15( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.238325119s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.703681946s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.15( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.627493858s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.092842102s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.17( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.627502441s) [2] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.092887878s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.17( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.627489090s) [2] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.092887878s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.14( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.238242149s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.703750610s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.14( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.238229752s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.703750610s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.13( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.238276482s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.703872681s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.13( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.238255501s) [0] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.703872681s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.a( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.627099037s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.092819214s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.1f( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.237776756s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 108.703544617s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.12( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.627206802s) [2] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.092994690s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.12( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.627187729s) [2] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.092994690s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[4.1f( empty local-lis/les=32/33 n=0 ec=32/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=13.237753868s) [2] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.703544617s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.1c( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.627097130s) [2] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 104.093048096s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.1c( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.627079964s) [2] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.093048096s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[6.a( empty local-lis/les=34/36 n=0 ec=34/18 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=8.627081871s) [0] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.092819214s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[8.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[5.14( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[5.17( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[2.19( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[2.e( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[5.c( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[5.6( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[5.a( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[2.1( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[2.4( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[5.19( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.oiufcm on compute-1
Dec 05 09:47:12 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.oiufcm on compute-1
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[5.1d( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec 05 09:47:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[2.6( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[5.5( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[2.9( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[5.1e( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[5.3( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[2.1e( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[2.1f( empty local-lis/les=0/0 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:12 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec 05 09:47:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2033419333' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 05 09:47:13 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.18 deep-scrub starts
Dec 05 09:47:13 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.18 deep-scrub ok
Dec 05 09:47:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 05 09:47:13 compute-0 ceph-mon[74418]: 4.9 scrub starts
Dec 05 09:47:13 compute-0 ceph-mon[74418]: 4.9 scrub ok
Dec 05 09:47:13 compute-0 ceph-mon[74418]: 2.b scrub starts
Dec 05 09:47:13 compute-0 ceph-mon[74418]: 2.b scrub ok
Dec 05 09:47:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oiufcm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 09:47:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:47:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:47:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:47:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:47:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:47:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:47:13 compute-0 ceph-mon[74418]: osdmap e38: 3 total, 3 up, 3 in
Dec 05 09:47:13 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2368140485' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 05 09:47:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:13 compute-0 ceph-mon[74418]: from='mgr.14120 192.168.122.100:0/3170405655' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:47:13 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 05 09:47:13 compute-0 ceph-mon[74418]: Deploying daemon rgw.rgw.compute-1.oiufcm on compute-1
Dec 05 09:47:13 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2033419333' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 05 09:47:13 compute-0 ceph-mon[74418]: 5.12 scrub starts
Dec 05 09:47:13 compute-0 ceph-mon[74418]: 5.12 scrub ok
Dec 05 09:47:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 05 09:47:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec 05 09:47:13 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=38/39 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2033419333' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[2.19( empty local-lis/les=38/39 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.13( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=38/39 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.10( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=38/39 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[5.17( empty local-lis/les=38/39 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[5.1e( empty local-lis/les=38/39 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[5.14( empty local-lis/les=38/39 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.b( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=38/39 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[5.a( empty local-lis/les=38/39 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[2.e( empty local-lis/les=38/39 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.8( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.f( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=38/39 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[5.c( empty local-lis/les=38/39 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.e( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[5.6( empty local-lis/les=38/39 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[2.1( empty local-lis/les=38/39 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[2.6( empty local-lis/les=38/39 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=38/39 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[5.3( empty local-lis/les=38/39 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=38/39 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.9( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.3( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[2.4( empty local-lis/les=38/39 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=38/39 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=38/39 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[5.5( empty local-lis/les=38/39 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[2.9( empty local-lis/les=38/39 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.18( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.2( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=38/39 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[5.1d( empty local-lis/les=38/39 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=38/39 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[2.1f( empty local-lis/les=38/39 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=38/39 n=0 ec=32/14 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[2.1e( empty local-lis/les=38/39 n=0 ec=30/13 lis/c=30/30 les/c/f=31/31/0 sis=38) [1] r=0 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.1b( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[5.19( empty local-lis/les=38/39 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.1e( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[8.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.4( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 39 pg[7.6( empty local-lis/les=38/39 n=0 ec=35/20 lis/c=35/35 les/c/f=37/37/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:13 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.hvnxai(active, since 3m), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:47:13 compute-0 systemd[1]: libpod-5511e28a53fa53a3d324b9dd9de5b74b724f33ef8490b3d7e37fe9d5c2fd0669.scope: Deactivated successfully.
Dec 05 09:47:13 compute-0 podman[89280]: 2025-12-05 09:47:13.984791412 +0000 UTC m=+1.679056515 container died 5511e28a53fa53a3d324b9dd9de5b74b724f33ef8490b3d7e37fe9d5c2fd0669 (image=quay.io/ceph/ceph:v19, name=vibrant_knuth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 05 09:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4a21622c1c747b38c8004e8302a88c0e23c02d2a1e1b2f5cd99b26de8844f11-merged.mount: Deactivated successfully.
Dec 05 09:47:14 compute-0 sshd-session[75949]: Connection closed by 192.168.122.100 port 55708
Dec 05 09:47:14 compute-0 sshd-session[76005]: Connection closed by 192.168.122.100 port 55732
Dec 05 09:47:14 compute-0 sshd-session[75891]: Connection closed by 192.168.122.100 port 55682
Dec 05 09:47:14 compute-0 sshd-session[75862]: Connection closed by 192.168.122.100 port 55672
Dec 05 09:47:14 compute-0 sshd-session[75976]: Connection closed by 192.168.122.100 port 55722
Dec 05 09:47:14 compute-0 sshd-session[75920]: Connection closed by 192.168.122.100 port 55694
Dec 05 09:47:14 compute-0 sshd-session[75973]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:14 compute-0 sshd-session[76002]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:14 compute-0 sshd-session[75888]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:14 compute-0 sshd-session[75859]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:14 compute-0 sshd-session[75946]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:14 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 sshd-session[75716]: Connection closed by 192.168.122.100 port 44230
Dec 05 09:47:14 compute-0 systemd[1]: session-33.scope: Consumed 26.004s CPU time.
Dec 05 09:47:14 compute-0 sshd-session[75717]: Connection closed by 192.168.122.100 port 44232
Dec 05 09:47:14 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 sshd-session[75833]: Connection closed by 192.168.122.100 port 44270
Dec 05 09:47:14 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 sshd-session[75804]: Connection closed by 192.168.122.100 port 44268
Dec 05 09:47:14 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 sshd-session[75775]: Connection closed by 192.168.122.100 port 44258
Dec 05 09:47:14 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 sshd-session[75746]: Connection closed by 192.168.122.100 port 44248
Dec 05 09:47:14 compute-0 systemd-logind[789]: Session 32 logged out. Waiting for processes to exit.
Dec 05 09:47:14 compute-0 podman[89280]: 2025-12-05 09:47:14.0510629 +0000 UTC m=+1.745328003 container remove 5511e28a53fa53a3d324b9dd9de5b74b724f33ef8490b3d7e37fe9d5c2fd0669 (image=quay.io/ceph/ceph:v19, name=vibrant_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:47:14 compute-0 systemd-logind[789]: Session 28 logged out. Waiting for processes to exit.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Session 33 logged out. Waiting for processes to exit.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Session 31 logged out. Waiting for processes to exit.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Session 29 logged out. Waiting for processes to exit.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Removed session 33.
Dec 05 09:47:14 compute-0 systemd[1]: libpod-conmon-5511e28a53fa53a3d324b9dd9de5b74b724f33ef8490b3d7e37fe9d5c2fd0669.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Removed session 32.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Removed session 29.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Removed session 31.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Removed session 28.
Dec 05 09:47:14 compute-0 sshd-session[75693]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:14 compute-0 sshd-session[75917]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:14 compute-0 sshd-session[75830]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:14 compute-0 sshd-session[75710]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:14 compute-0 sshd-session[75801]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:14 compute-0 sshd-session[75743]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:14 compute-0 sshd-session[75772]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:14 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Session 21 logged out. Waiting for processes to exit.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Session 30 logged out. Waiting for processes to exit.
Dec 05 09:47:14 compute-0 sudo[89277]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:14 compute-0 systemd-logind[789]: Session 23 logged out. Waiting for processes to exit.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Session 26 logged out. Waiting for processes to exit.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Session 25 logged out. Waiting for processes to exit.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Session 24 logged out. Waiting for processes to exit.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Session 27 logged out. Waiting for processes to exit.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Removed session 21.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Removed session 30.
Dec 05 09:47:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ignoring --setuser ceph since I am not root
Dec 05 09:47:14 compute-0 systemd-logind[789]: Removed session 23.
Dec 05 09:47:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ignoring --setgroup ceph since I am not root
Dec 05 09:47:14 compute-0 systemd-logind[789]: Removed session 27.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Removed session 26.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Removed session 25.
Dec 05 09:47:14 compute-0 systemd-logind[789]: Removed session 24.
Dec 05 09:47:14 compute-0 ceph-mgr[74711]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 05 09:47:14 compute-0 ceph-mgr[74711]: pidfile_write: ignore empty --pid-file
Dec 05 09:47:14 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'alerts'
Dec 05 09:47:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:14.232+0000 7f7ac5eee140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:47:14 compute-0 ceph-mgr[74711]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:47:14 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'balancer'
Dec 05 09:47:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:14.316+0000 7f7ac5eee140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:47:14 compute-0 ceph-mgr[74711]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:47:14 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'cephadm'
Dec 05 09:47:14 compute-0 sudo[89380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njyzrvnjmxowpynhkhjjiljdtyseenxs ; /usr/bin/python3'
Dec 05 09:47:14 compute-0 sudo[89380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:14 compute-0 python3[89382]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:14 compute-0 podman[89383]: 2025-12-05 09:47:14.556957175 +0000 UTC m=+0.045504833 container create 0510fff189e08cb10587642ae2289af570997dd395060d9c256a366df3cc0338 (image=quay.io/ceph/ceph:v19, name=intelligent_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 09:47:14 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 05 09:47:14 compute-0 systemd[1]: Started libpod-conmon-0510fff189e08cb10587642ae2289af570997dd395060d9c256a366df3cc0338.scope.
Dec 05 09:47:14 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 05 09:47:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0736b7172fb84e7cd029a0f0ea2e308bb532b9558f39f59810cc4dfdf9f8b852/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0736b7172fb84e7cd029a0f0ea2e308bb532b9558f39f59810cc4dfdf9f8b852/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0736b7172fb84e7cd029a0f0ea2e308bb532b9558f39f59810cc4dfdf9f8b852/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:14 compute-0 podman[89383]: 2025-12-05 09:47:14.538381672 +0000 UTC m=+0.026929330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:14 compute-0 podman[89383]: 2025-12-05 09:47:14.640714605 +0000 UTC m=+0.129262273 container init 0510fff189e08cb10587642ae2289af570997dd395060d9c256a366df3cc0338 (image=quay.io/ceph/ceph:v19, name=intelligent_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:14 compute-0 podman[89383]: 2025-12-05 09:47:14.649514344 +0000 UTC m=+0.138061982 container start 0510fff189e08cb10587642ae2289af570997dd395060d9c256a366df3cc0338 (image=quay.io/ceph/ceph:v19, name=intelligent_chandrasekhar, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 09:47:14 compute-0 podman[89383]: 2025-12-05 09:47:14.653917818 +0000 UTC m=+0.142465466 container attach 0510fff189e08cb10587642ae2289af570997dd395060d9c256a366df3cc0338 (image=quay.io/ceph/ceph:v19, name=intelligent_chandrasekhar, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 05 09:47:14 compute-0 ceph-mon[74418]: 6.18 deep-scrub starts
Dec 05 09:47:14 compute-0 ceph-mon[74418]: 6.18 deep-scrub ok
Dec 05 09:47:14 compute-0 ceph-mon[74418]: 2.1a deep-scrub starts
Dec 05 09:47:14 compute-0 ceph-mon[74418]: 2.1a deep-scrub ok
Dec 05 09:47:14 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 05 09:47:14 compute-0 ceph-mon[74418]: osdmap e39: 3 total, 3 up, 3 in
Dec 05 09:47:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2033419333' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 05 09:47:14 compute-0 ceph-mon[74418]: mgrmap e13: compute-0.hvnxai(active, since 3m), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:47:14 compute-0 ceph-mon[74418]: 3.11 scrub starts
Dec 05 09:47:14 compute-0 ceph-mon[74418]: 3.11 scrub ok
Dec 05 09:47:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 05 09:47:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 05 09:47:14 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 05 09:47:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 05 09:47:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 05 09:47:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 05 09:47:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 05 09:47:15 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'crash'
Dec 05 09:47:15 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 40 pg[9.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:15.230+0000 7f7ac5eee140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:47:15 compute-0 ceph-mgr[74711]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:47:15 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'dashboard'
Dec 05 09:47:15 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 05 09:47:15 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 05 09:47:15 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'devicehealth'
Dec 05 09:47:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:15.898+0000 7f7ac5eee140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:47:15 compute-0 ceph-mgr[74711]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:47:15 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'diskprediction_local'
Dec 05 09:47:15 compute-0 ceph-mon[74418]: 6.1f scrub starts
Dec 05 09:47:15 compute-0 ceph-mon[74418]: 6.1f scrub ok
Dec 05 09:47:15 compute-0 ceph-mon[74418]: 7.1c scrub starts
Dec 05 09:47:15 compute-0 ceph-mon[74418]: 7.1c scrub ok
Dec 05 09:47:15 compute-0 ceph-mon[74418]: osdmap e40: 3 total, 3 up, 3 in
Dec 05 09:47:15 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3149331825' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 05 09:47:15 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 05 09:47:15 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3300078974' entity='client.rgw.rgw.compute-1.oiufcm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 05 09:47:15 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 05 09:47:15 compute-0 ceph-mon[74418]: 3.e scrub starts
Dec 05 09:47:15 compute-0 ceph-mon[74418]: 3.e scrub ok
Dec 05 09:47:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 05 09:47:15 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 05 09:47:15 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 05 09:47:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 05 09:47:15 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 05 09:47:15 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 41 pg[9.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 05 09:47:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 05 09:47:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   from numpy import show_config as show_numpy_config
Dec 05 09:47:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:16.074+0000 7f7ac5eee140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:47:16 compute-0 ceph-mgr[74711]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:47:16 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'influx'
Dec 05 09:47:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:16.158+0000 7f7ac5eee140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:47:16 compute-0 ceph-mgr[74711]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:47:16 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'insights'
Dec 05 09:47:16 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'iostat'
Dec 05 09:47:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:16.310+0000 7f7ac5eee140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:47:16 compute-0 ceph-mgr[74711]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:47:16 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'k8sevents'
Dec 05 09:47:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:47:16 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 05 09:47:16 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 05 09:47:16 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'localpool'
Dec 05 09:47:16 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'mds_autoscaler'
Dec 05 09:47:16 compute-0 ceph-mon[74418]: 2.17 deep-scrub starts
Dec 05 09:47:16 compute-0 ceph-mon[74418]: 2.17 deep-scrub ok
Dec 05 09:47:16 compute-0 ceph-mon[74418]: 6.c scrub starts
Dec 05 09:47:16 compute-0 ceph-mon[74418]: 6.c scrub ok
Dec 05 09:47:16 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 05 09:47:16 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 05 09:47:16 compute-0 ceph-mon[74418]: osdmap e41: 3 total, 3 up, 3 in
Dec 05 09:47:16 compute-0 ceph-mon[74418]: 5.8 scrub starts
Dec 05 09:47:16 compute-0 ceph-mon[74418]: 5.8 scrub ok
Dec 05 09:47:16 compute-0 ceph-mon[74418]: 7.12 scrub starts
Dec 05 09:47:16 compute-0 ceph-mon[74418]: 7.12 scrub ok
Dec 05 09:47:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 05 09:47:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 05 09:47:16 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 05 09:47:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 05 09:47:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 05 09:47:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 05 09:47:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'mirroring'
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'nfs'
Dec 05 09:47:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:17.332+0000 7f7ac5eee140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'orchestrator'
Dec 05 09:47:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:17.566+0000 7f7ac5eee140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'osd_perf_query'
Dec 05 09:47:17 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 05 09:47:17 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 05 09:47:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:17.646+0000 7f7ac5eee140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'osd_support'
Dec 05 09:47:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:17.721+0000 7f7ac5eee140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'pg_autoscaler'
Dec 05 09:47:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:17.809+0000 7f7ac5eee140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'progress'
Dec 05 09:47:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:17.889+0000 7f7ac5eee140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:47:17 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'prometheus'
Dec 05 09:47:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 05 09:47:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:18.263+0000 7f7ac5eee140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:47:18 compute-0 ceph-mgr[74711]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:47:18 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rbd_support'
Dec 05 09:47:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:18.365+0000 7f7ac5eee140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:47:18 compute-0 ceph-mgr[74711]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:47:18 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'restful'
Dec 05 09:47:18 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 05 09:47:18 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 05 09:47:18 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rgw'
Dec 05 09:47:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 05 09:47:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 05 09:47:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 05 09:47:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 05 09:47:18 compute-0 ceph-mon[74418]: 4.f scrub starts
Dec 05 09:47:18 compute-0 ceph-mon[74418]: 4.f scrub ok
Dec 05 09:47:18 compute-0 ceph-mon[74418]: osdmap e42: 3 total, 3 up, 3 in
Dec 05 09:47:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3300078974' entity='client.rgw.rgw.compute-1.oiufcm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 05 09:47:18 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 05 09:47:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3149331825' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 05 09:47:18 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 05 09:47:18 compute-0 ceph-mon[74418]: 2.16 deep-scrub starts
Dec 05 09:47:18 compute-0 ceph-mon[74418]: 2.16 deep-scrub ok
Dec 05 09:47:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:18.899+0000 7f7ac5eee140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:47:18 compute-0 ceph-mgr[74711]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:47:18 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rook'
Dec 05 09:47:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:19.498+0000 7f7ac5eee140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:47:19 compute-0 ceph-mgr[74711]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:47:19 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'selftest'
Dec 05 09:47:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:19.576+0000 7f7ac5eee140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:47:19 compute-0 ceph-mgr[74711]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:47:19 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'snap_schedule'
Dec 05 09:47:19 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 05 09:47:19 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 05 09:47:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:19.657+0000 7f7ac5eee140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 09:47:19 compute-0 ceph-mgr[74711]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 09:47:19 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'stats'
Dec 05 09:47:19 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'status'
Dec 05 09:47:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 05 09:47:19 compute-0 ceph-mon[74418]: 5.b scrub starts
Dec 05 09:47:19 compute-0 ceph-mon[74418]: 5.b scrub ok
Dec 05 09:47:19 compute-0 ceph-mon[74418]: 6.6 scrub starts
Dec 05 09:47:19 compute-0 ceph-mon[74418]: 6.6 scrub ok
Dec 05 09:47:19 compute-0 ceph-mon[74418]: 5.d scrub starts
Dec 05 09:47:19 compute-0 ceph-mon[74418]: 5.d scrub ok
Dec 05 09:47:19 compute-0 ceph-mon[74418]: 2.14 scrub starts
Dec 05 09:47:19 compute-0 ceph-mon[74418]: 2.14 scrub ok
Dec 05 09:47:19 compute-0 ceph-mon[74418]: 4.4 scrub starts
Dec 05 09:47:19 compute-0 ceph-mon[74418]: 4.4 scrub ok
Dec 05 09:47:19 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 05 09:47:19 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 05 09:47:19 compute-0 ceph-mon[74418]: osdmap e43: 3 total, 3 up, 3 in
Dec 05 09:47:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 05 09:47:19 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 05 09:47:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 44 pg[11.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:47:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 05 09:47:19 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 05 09:47:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 05 09:47:19 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 05 09:47:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:19.800+0000 7f7ac5eee140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:47:19 compute-0 ceph-mgr[74711]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:47:19 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'telegraf'
Dec 05 09:47:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:19.867+0000 7f7ac5eee140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:47:19 compute-0 ceph-mgr[74711]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:47:19 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'telemetry'
Dec 05 09:47:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:20.020+0000 7f7ac5eee140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'test_orchestrator'
Dec 05 09:47:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:20.246+0000 7f7ac5eee140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'volumes'
Dec 05 09:47:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:20.517+0000 7f7ac5eee140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'zabbix'
Dec 05 09:47:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:20.595+0000 7f7ac5eee140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Active manager daemon compute-0.hvnxai restarted
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hvnxai
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: ms_deliver_dispatch: unhandled message 0x5557cbd5b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 05 09:47:20 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.0 deep-scrub starts
Dec 05 09:47:20 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.0 deep-scrub ok
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr handle_mgr_map Activating!
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr handle_mgr_map I am now activating
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.hvnxai(active, starting, since 0.286439s), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.wewrgp restarted
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.wewrgp started
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.wewrgp", "id": "compute-2.wewrgp"} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-2.wewrgp", "id": "compute-2.wewrgp"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.unhddt", "id": "compute-1.unhddt"} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unhddt", "id": "compute-1.unhddt"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 45 pg[11.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: 3.0 scrub starts
Dec 05 09:47:20 compute-0 ceph-mon[74418]: 3.0 scrub ok
Dec 05 09:47:20 compute-0 ceph-mon[74418]: 7.17 scrub starts
Dec 05 09:47:20 compute-0 ceph-mon[74418]: 7.17 scrub ok
Dec 05 09:47:20 compute-0 ceph-mon[74418]: 6.4 scrub starts
Dec 05 09:47:20 compute-0 ceph-mon[74418]: 6.4 scrub ok
Dec 05 09:47:20 compute-0 ceph-mon[74418]: osdmap e44: 3 total, 3 up, 3 in
Dec 05 09:47:20 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3149331825' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3300078974' entity='client.rgw.rgw.compute-1.oiufcm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: 5.0 scrub starts
Dec 05 09:47:20 compute-0 ceph-mon[74418]: Active manager daemon compute-0.hvnxai restarted
Dec 05 09:47:20 compute-0 ceph-mon[74418]: Activating manager daemon compute-0.hvnxai
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e1 all = 1
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 05 09:47:20 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Manager daemon compute-0.hvnxai is now available
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: balancer
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [balancer INFO root] Starting
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:47:20
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: cephadm
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: crash
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: dashboard
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: devicehealth
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [dashboard INFO sso] Loading SSO DB version=1
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Starting
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: iostat
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: nfs
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: orchestrator
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: pg_autoscaler
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: progress
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [progress INFO root] Loading...
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f7a4aaa85e0>, <progress.module.GhostEvent object at 0x7f7a4aaa8640>, <progress.module.GhostEvent object at 0x7f7a4aaa8670>, <progress.module.GhostEvent object at 0x7f7a4aaa86a0>, <progress.module.GhostEvent object at 0x7f7a4aaa86d0>, <progress.module.GhostEvent object at 0x7f7a4aaa8700>, <progress.module.GhostEvent object at 0x7f7a4aaa8730>, <progress.module.GhostEvent object at 0x7f7a4aaa8760>, <progress.module.GhostEvent object at 0x7f7a4aaa8790>, <progress.module.GhostEvent object at 0x7f7a4aaa87c0>, <progress.module.GhostEvent object at 0x7f7a4aaa87f0>, <progress.module.GhostEvent object at 0x7f7a4aaa8820>] historic events
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [progress INFO root] Loaded OSDMap, ready.
Dec 05 09:47:20 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] recovery thread starting
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] starting setup
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: rbd_support
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: restful
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [restful INFO root] server_addr: :: server_port: 8003
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: status
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: telemetry
Dec 05 09:47:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"} v 0)
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"}]: dispatch
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [restful WARNING root] server not running: no certificate configured
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] PerfHandler: starting
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TaskHandler: starting
Dec 05 09:47:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"} v 0)
Dec 05 09:47:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"}]: dispatch
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: volumes
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [rbd_support INFO root] setup complete
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 05 09:47:21 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unhddt restarted
Dec 05 09:47:21 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unhddt started
Dec 05 09:47:21 compute-0 sshd-session[89560]: Accepted publickey for ceph-admin from 192.168.122.100 port 54374 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:47:21 compute-0 systemd-logind[789]: New session 34 of user ceph-admin.
Dec 05 09:47:21 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Dec 05 09:47:21 compute-0 sshd-session[89560]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:47:21 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.module] Engine started.
Dec 05 09:47:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:47:21 compute-0 sudo[89575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:47:21 compute-0 sudo[89575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:21 compute-0 sudo[89575]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:21 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.0 deep-scrub starts
Dec 05 09:47:21 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.0 deep-scrub ok
Dec 05 09:47:21 compute-0 sudo[89601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 09:47:21 compute-0 sudo[89601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:22 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 05 09:47:22 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 05 09:47:22 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:47:22] ENGINE Bus STARTING
Dec 05 09:47:22 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:47:22] ENGINE Bus STARTING
Dec 05 09:47:22 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:47:22] ENGINE Serving on https://192.168.122.100:7150
Dec 05 09:47:22 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:47:22] ENGINE Serving on https://192.168.122.100:7150
Dec 05 09:47:22 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:47:22] ENGINE Client ('192.168.122.100', 37120) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:47:22 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:47:22] ENGINE Client ('192.168.122.100', 37120) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:47:22 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:47:22 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:47:22] ENGINE Serving on http://192.168.122.100:8765
Dec 05 09:47:22 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:47:22] ENGINE Serving on http://192.168.122.100:8765
Dec 05 09:47:22 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:47:22] ENGINE Bus STARTED
Dec 05 09:47:22 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:47:22] ENGINE Bus STARTED
Dec 05 09:47:23 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 05 09:47:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 05 09:47:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:47:23 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 05 09:47:23 compute-0 podman[89695]: 2025-12-05 09:47:23.87748077 +0000 UTC m=+1.826767308 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:23 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.hvnxai(active, since 3s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:23 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14364 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Dec 05 09:47:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:23 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 05 09:47:23 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 05 09:47:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 05 09:47:23 compute-0 ceph-mon[74418]: 2.11 scrub starts
Dec 05 09:47:23 compute-0 ceph-mon[74418]: 5.0 scrub ok
Dec 05 09:47:23 compute-0 ceph-mon[74418]: 2.11 scrub ok
Dec 05 09:47:23 compute-0 ceph-mon[74418]: 6.0 deep-scrub starts
Dec 05 09:47:23 compute-0 ceph-mon[74418]: 6.0 deep-scrub ok
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 05 09:47:23 compute-0 ceph-mon[74418]: osdmap e45: 3 total, 3 up, 3 in
Dec 05 09:47:23 compute-0 ceph-mon[74418]: mgrmap e14: compute-0.hvnxai(active, starting, since 0.286439s), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:47:23 compute-0 ceph-mon[74418]: Standby manager daemon compute-2.wewrgp restarted
Dec 05 09:47:23 compute-0 ceph-mon[74418]: Standby manager daemon compute-2.wewrgp started
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-2.wewrgp", "id": "compute-2.wewrgp"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unhddt", "id": "compute-1.unhddt"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3300078974' entity='client.rgw.rgw.compute-1.oiufcm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3149331825' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: Manager daemon compute-0.hvnxai is now available
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"}]: dispatch
Dec 05 09:47:23 compute-0 ceph-mon[74418]: Standby manager daemon compute-1.unhddt restarted
Dec 05 09:47:23 compute-0 ceph-mon[74418]: Standby manager daemon compute-1.unhddt started
Dec 05 09:47:23 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:23 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 05 09:47:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:47:23 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Check health
Dec 05 09:47:23 compute-0 podman[89695]: 2025-12-05 09:47:23.990514875 +0000 UTC m=+1.939801383 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Dec 05 09:47:23 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:23 compute-0 intelligent_chandrasekhar[89399]: Option GRAFANA_API_USERNAME updated
Dec 05 09:47:24 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:24 compute-0 systemd[1]: libpod-0510fff189e08cb10587642ae2289af570997dd395060d9c256a366df3cc0338.scope: Deactivated successfully.
Dec 05 09:47:24 compute-0 podman[89383]: 2025-12-05 09:47:24.024168783 +0000 UTC m=+9.512716411 container died 0510fff189e08cb10587642ae2289af570997dd395060d9c256a366df3cc0338 (image=quay.io/ceph/ceph:v19, name=intelligent_chandrasekhar, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0736b7172fb84e7cd029a0f0ea2e308bb532b9558f39f59810cc4dfdf9f8b852-merged.mount: Deactivated successfully.
Dec 05 09:47:24 compute-0 podman[89383]: 2025-12-05 09:47:24.089266777 +0000 UTC m=+9.577814415 container remove 0510fff189e08cb10587642ae2289af570997dd395060d9c256a366df3cc0338 (image=quay.io/ceph/ceph:v19, name=intelligent_chandrasekhar, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 05 09:47:24 compute-0 systemd[1]: libpod-conmon-0510fff189e08cb10587642ae2289af570997dd395060d9c256a366df3cc0338.scope: Deactivated successfully.
Dec 05 09:47:24 compute-0 sudo[89380]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:24 compute-0 sudo[89601]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:47:24 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:47:24 compute-0 sudo[89850]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfxqchwbvvsywbikjodfjtaxvgwggdwf ; /usr/bin/python3'
Dec 05 09:47:24 compute-0 sudo[89850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:47:24 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:24 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:47:24 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:24 compute-0 sudo[89853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:47:24 compute-0 sudo[89853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:24 compute-0 sudo[89853]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:24 compute-0 sudo[89878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:47:24 compute-0 sudo[89878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:24 compute-0 python3[89852]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Dec 05 09:47:24 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 05 09:47:24 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 05 09:47:24 compute-0 podman[89903]: 2025-12-05 09:47:24.515406155 +0000 UTC m=+0.042480078 container create 0e0d50ae42e4811c4ed04d6413692602fc7ae9f23319772fae05c18c2ddc63a9 (image=quay.io/ceph/ceph:v19, name=vigorous_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 09:47:24 compute-0 systemd[1]: Started libpod-conmon-0e0d50ae42e4811c4ed04d6413692602fc7ae9f23319772fae05c18c2ddc63a9.scope.
Dec 05 09:47:24 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35e582a878b39788631e6fd10034860dd66394be6df84944817b5465be643d0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35e582a878b39788631e6fd10034860dd66394be6df84944817b5465be643d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35e582a878b39788631e6fd10034860dd66394be6df84944817b5465be643d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:24 compute-0 podman[89903]: 2025-12-05 09:47:24.493697064 +0000 UTC m=+0.020770997 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:24 compute-0 podman[89903]: 2025-12-05 09:47:24.605729421 +0000 UTC m=+0.132803364 container init 0e0d50ae42e4811c4ed04d6413692602fc7ae9f23319772fae05c18c2ddc63a9 (image=quay.io/ceph/ceph:v19, name=vigorous_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec 05 09:47:24 compute-0 podman[89903]: 2025-12-05 09:47:24.612855491 +0000 UTC m=+0.139929404 container start 0e0d50ae42e4811c4ed04d6413692602fc7ae9f23319772fae05c18c2ddc63a9 (image=quay.io/ceph/ceph:v19, name=vigorous_gagarin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:47:24 compute-0 podman[89903]: 2025-12-05 09:47:24.617129692 +0000 UTC m=+0.144203635 container attach 0e0d50ae42e4811c4ed04d6413692602fc7ae9f23319772fae05c18c2ddc63a9 (image=quay.io/ceph/ceph:v19, name=vigorous_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:24 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.hvnxai(active, since 4s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 7.15 scrub starts
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 7.15 scrub ok
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 4.0 deep-scrub starts
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 4.0 deep-scrub ok
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 3.1a scrub starts
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 3.1a scrub ok
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 4.7 scrub starts
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 4.7 scrub ok
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 3.1b scrub starts
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 2.3 scrub starts
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 3.1b scrub ok
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 2.3 scrub ok
Dec 05 09:47:24 compute-0 ceph-mon[74418]: [05/Dec/2025:09:47:22] ENGINE Bus STARTING
Dec 05 09:47:24 compute-0 ceph-mon[74418]: [05/Dec/2025:09:47:22] ENGINE Serving on https://192.168.122.100:7150
Dec 05 09:47:24 compute-0 ceph-mon[74418]: [05/Dec/2025:09:47:22] ENGINE Client ('192.168.122.100', 37120) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:47:24 compute-0 ceph-mon[74418]: [05/Dec/2025:09:47:22] ENGINE Serving on http://192.168.122.100:8765
Dec 05 09:47:24 compute-0 ceph-mon[74418]: [05/Dec/2025:09:47:22] ENGINE Bus STARTED
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 6.f scrub starts
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 5.1a scrub starts
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 5.1a scrub ok
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 7.0 scrub starts
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 6.f scrub ok
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 7.0 scrub ok
Dec 05 09:47:24 compute-0 ceph-mon[74418]: mgrmap e15: compute-0.hvnxai(active, since 3s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:24 compute-0 ceph-mon[74418]: from='client.14364 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:24 compute-0 ceph-mon[74418]: pgmap v3: 197 pgs: 197 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:24 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-1.oiufcm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 05 09:47:24 compute-0 ceph-mon[74418]: from='client.? ' entity='client.rgw.rgw.compute-2.gzawrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 05 09:47:24 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:24 compute-0 ceph-mon[74418]: osdmap e46: 3 total, 3 up, 3 in
Dec 05 09:47:24 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:24 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:24 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:24 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:24 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:24 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 2.2 deep-scrub starts
Dec 05 09:47:24 compute-0 ceph-mon[74418]: 2.2 deep-scrub ok
Dec 05 09:47:25 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14394 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Dec 05 09:47:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:25 compute-0 vigorous_gagarin[89919]: Option GRAFANA_API_PASSWORD updated
Dec 05 09:47:25 compute-0 systemd[1]: libpod-0e0d50ae42e4811c4ed04d6413692602fc7ae9f23319772fae05c18c2ddc63a9.scope: Deactivated successfully.
Dec 05 09:47:25 compute-0 podman[89903]: 2025-12-05 09:47:25.043660331 +0000 UTC m=+0.570734264 container died 0e0d50ae42e4811c4ed04d6413692602fc7ae9f23319772fae05c18c2ddc63a9 (image=quay.io/ceph/ceph:v19, name=vigorous_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d35e582a878b39788631e6fd10034860dd66394be6df84944817b5465be643d0-merged.mount: Deactivated successfully.
Dec 05 09:47:25 compute-0 podman[89903]: 2025-12-05 09:47:25.091785907 +0000 UTC m=+0.618859820 container remove 0e0d50ae42e4811c4ed04d6413692602fc7ae9f23319772fae05c18c2ddc63a9 (image=quay.io/ceph/ceph:v19, name=vigorous_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 05 09:47:25 compute-0 systemd[1]: libpod-conmon-0e0d50ae42e4811c4ed04d6413692602fc7ae9f23319772fae05c18c2ddc63a9.scope: Deactivated successfully.
Dec 05 09:47:25 compute-0 sudo[89850]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:47:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:47:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 05 09:47:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:25 compute-0 sudo[89994]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmossxuqnjaqbeynuwlonuqjjivqnyak ; /usr/bin/python3'
Dec 05 09:47:25 compute-0 sudo[89994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:25 compute-0 python3[89996]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:25 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 05 09:47:25 compute-0 podman[89997]: 2025-12-05 09:47:25.53012335 +0000 UTC m=+0.054952610 container create ce9aeba9ab9a7fb0241f37e871b3821542ae7de9d7e3d2d3482c733c9f9a01af (image=quay.io/ceph/ceph:v19, name=flamboyant_hofstadter, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 09:47:25 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 05 09:47:25 compute-0 systemd[1]: Started libpod-conmon-ce9aeba9ab9a7fb0241f37e871b3821542ae7de9d7e3d2d3482c733c9f9a01af.scope.
Dec 05 09:47:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff916daf4955cb2f8d3e50bc12b936de3b7bc8797025efa562a690e8c2fe6bb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff916daf4955cb2f8d3e50bc12b936de3b7bc8797025efa562a690e8c2fe6bb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff916daf4955cb2f8d3e50bc12b936de3b7bc8797025efa562a690e8c2fe6bb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:25 compute-0 podman[89997]: 2025-12-05 09:47:25.512336478 +0000 UTC m=+0.037165758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:25 compute-0 podman[89997]: 2025-12-05 09:47:25.609701202 +0000 UTC m=+0.134530482 container init ce9aeba9ab9a7fb0241f37e871b3821542ae7de9d7e3d2d3482c733c9f9a01af (image=quay.io/ceph/ceph:v19, name=flamboyant_hofstadter, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 09:47:25 compute-0 podman[89997]: 2025-12-05 09:47:25.616452062 +0000 UTC m=+0.141281332 container start ce9aeba9ab9a7fb0241f37e871b3821542ae7de9d7e3d2d3482c733c9f9a01af (image=quay.io/ceph/ceph:v19, name=flamboyant_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 09:47:25 compute-0 podman[89997]: 2025-12-05 09:47:25.620559698 +0000 UTC m=+0.145388958 container attach ce9aeba9ab9a7fb0241f37e871b3821542ae7de9d7e3d2d3482c733c9f9a01af (image=quay.io/ceph/ceph:v19, name=flamboyant_hofstadter, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 09:47:25 compute-0 sudo[89878]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:25 compute-0 sudo[90051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:47:25 compute-0 sudo[90051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:25 compute-0 sudo[90051]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:25 compute-0 sudo[90076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 05 09:47:25 compute-0 sudo[90076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:25 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14400 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:26 compute-0 sudo[90076]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Dec 05 09:47:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:47:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:47:26 compute-0 ceph-mon[74418]: 6.9 scrub starts
Dec 05 09:47:26 compute-0 ceph-mon[74418]: 6.9 scrub ok
Dec 05 09:47:26 compute-0 ceph-mon[74418]: 3.8 scrub starts
Dec 05 09:47:26 compute-0 ceph-mon[74418]: 3.8 scrub ok
Dec 05 09:47:26 compute-0 ceph-mon[74418]: pgmap v5: 197 pgs: 197 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:26 compute-0 ceph-mon[74418]: mgrmap e16: compute-0.hvnxai(active, since 4s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:26 compute-0 ceph-mon[74418]: from='client.14394 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:26 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:26 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:26 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:26 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:26 compute-0 ceph-mon[74418]: 5.e scrub starts
Dec 05 09:47:26 compute-0 ceph-mon[74418]: 5.e scrub ok
Dec 05 09:47:26 compute-0 ceph-mon[74418]: 7.7 scrub starts
Dec 05 09:47:26 compute-0 ceph-mon[74418]: 7.7 scrub ok
Dec 05 09:47:26 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:26 compute-0 flamboyant_hofstadter[90016]: Option ALERTMANAGER_API_HOST updated
Dec 05 09:47:26 compute-0 systemd[1]: libpod-ce9aeba9ab9a7fb0241f37e871b3821542ae7de9d7e3d2d3482c733c9f9a01af.scope: Deactivated successfully.
Dec 05 09:47:26 compute-0 podman[89997]: 2025-12-05 09:47:26.389961338 +0000 UTC m=+0.914790608 container died ce9aeba9ab9a7fb0241f37e871b3821542ae7de9d7e3d2d3482c733c9f9a01af (image=quay.io/ceph/ceph:v19, name=flamboyant_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:26 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:47:26 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:47:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ff916daf4955cb2f8d3e50bc12b936de3b7bc8797025efa562a690e8c2fe6bb-merged.mount: Deactivated successfully.
Dec 05 09:47:26 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec 05 09:47:26 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:26 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec 05 09:47:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 05 09:47:26 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:47:26 compute-0 podman[89997]: 2025-12-05 09:47:26.555183214 +0000 UTC m=+1.080012474 container remove ce9aeba9ab9a7fb0241f37e871b3821542ae7de9d7e3d2d3482c733c9f9a01af (image=quay.io/ceph/ceph:v19, name=flamboyant_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 09:47:26 compute-0 systemd[1]: libpod-conmon-ce9aeba9ab9a7fb0241f37e871b3821542ae7de9d7e3d2d3482c733c9f9a01af.scope: Deactivated successfully.
Dec 05 09:47:26 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 05 09:47:26 compute-0 ceph-mgr[74711]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Dec 05 09:47:26 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Dec 05 09:47:26 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:26 compute-0 sudo[89994]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 05 09:47:26 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 05 09:47:26 compute-0 ceph-mgr[74711]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 05 09:47:26 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 05 09:47:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:47:26 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:47:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:47:26 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:47:26 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:47:26 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:47:26 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:47:26 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:47:26 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:47:26 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:47:26 compute-0 sudo[90136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:47:26 compute-0 sudo[90136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:26 compute-0 sudo[90136]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:26 compute-0 sudo[90190]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uidmoyjllbgqotpdfzkzanovtwpfsnob ; /usr/bin/python3'
Dec 05 09:47:26 compute-0 sudo[90190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:26 compute-0 sudo[90181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph
Dec 05 09:47:26 compute-0 sudo[90181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:26 compute-0 sudo[90181]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:26 compute-0 sudo[90212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:47:26 compute-0 sudo[90212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:26 compute-0 sudo[90212]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:26 compute-0 python3[90208]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:26 compute-0 sudo[90237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:47:26 compute-0 sudo[90237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:26 compute-0 sudo[90237]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:26 compute-0 podman[90261]: 2025-12-05 09:47:26.8978142 +0000 UTC m=+0.038983380 container create 591cd1200b9db9012e919f53357d0ed22ce7766e466816859838b422c458fd2a (image=quay.io/ceph/ceph:v19, name=boring_greider, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:26 compute-0 sudo[90268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:47:26 compute-0 sudo[90268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:26 compute-0 sudo[90268]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:26 compute-0 systemd[1]: Started libpod-conmon-591cd1200b9db9012e919f53357d0ed22ce7766e466816859838b422c458fd2a.scope.
Dec 05 09:47:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd00f06847a00724c312a60bafe10125fb699171f18f8720915f39262f7293d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd00f06847a00724c312a60bafe10125fb699171f18f8720915f39262f7293d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd00f06847a00724c312a60bafe10125fb699171f18f8720915f39262f7293d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:26 compute-0 podman[90261]: 2025-12-05 09:47:26.880012728 +0000 UTC m=+0.021181938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:26 compute-0 podman[90261]: 2025-12-05 09:47:26.984223685 +0000 UTC m=+0.125392885 container init 591cd1200b9db9012e919f53357d0ed22ce7766e466816859838b422c458fd2a (image=quay.io/ceph/ceph:v19, name=boring_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 09:47:26 compute-0 podman[90261]: 2025-12-05 09:47:26.994120563 +0000 UTC m=+0.135289743 container start 591cd1200b9db9012e919f53357d0ed22ce7766e466816859838b422c458fd2a (image=quay.io/ceph/ceph:v19, name=boring_greider, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:47:26 compute-0 podman[90261]: 2025-12-05 09:47:26.9979014 +0000 UTC m=+0.139070600 container attach 591cd1200b9db9012e919f53357d0ed22ce7766e466816859838b422c458fd2a (image=quay.io/ceph/ceph:v19, name=boring_greider, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 09:47:27 compute-0 sudo[90329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:47:27 compute-0 sudo[90329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90329]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 sudo[90355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:47:27 compute-0 sudo[90355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90355]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 sudo[90382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:47:27 compute-0 sudo[90382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90382]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:27 compute-0 sudo[90424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:47:27 compute-0 sudo[90424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90424]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:27 compute-0 sudo[90449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:47:27 compute-0 sudo[90449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90449]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 ceph-mon[74418]: 6.b scrub starts
Dec 05 09:47:27 compute-0 ceph-mon[74418]: 6.b scrub ok
Dec 05 09:47:27 compute-0 ceph-mon[74418]: from='client.14400 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:27 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:27 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:27 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:27 compute-0 ceph-mon[74418]: 3.9 scrub starts
Dec 05 09:47:27 compute-0 ceph-mon[74418]: 3.9 scrub ok
Dec 05 09:47:27 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:27 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:27 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:27 compute-0 ceph-mon[74418]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec 05 09:47:27 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:27 compute-0 ceph-mon[74418]: 7.1 scrub starts
Dec 05 09:47:27 compute-0 ceph-mon[74418]: 7.1 scrub ok
Dec 05 09:47:27 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 05 09:47:27 compute-0 ceph-mon[74418]: Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 05 09:47:27 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:47:27 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:47:27 compute-0 ceph-mon[74418]: Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:47:27 compute-0 ceph-mon[74418]: Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:47:27 compute-0 ceph-mon[74418]: Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:47:27 compute-0 ceph-mon[74418]: pgmap v6: 197 pgs: 197 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:27 compute-0 sudo[90474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:47:27 compute-0 sudo[90474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90474]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14406 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:27 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:27 compute-0 boring_greider[90307]: Option PROMETHEUS_API_HOST updated
Dec 05 09:47:27 compute-0 sudo[90499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:47:27 compute-0 sudo[90499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90499]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 systemd[1]: libpod-591cd1200b9db9012e919f53357d0ed22ce7766e466816859838b422c458fd2a.scope: Deactivated successfully.
Dec 05 09:47:27 compute-0 podman[90261]: 2025-12-05 09:47:27.388525888 +0000 UTC m=+0.529695078 container died 591cd1200b9db9012e919f53357d0ed22ce7766e466816859838b422c458fd2a (image=quay.io/ceph/ceph:v19, name=boring_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 09:47:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fd00f06847a00724c312a60bafe10125fb699171f18f8720915f39262f7293d-merged.mount: Deactivated successfully.
Dec 05 09:47:27 compute-0 podman[90261]: 2025-12-05 09:47:27.434077741 +0000 UTC m=+0.575246921 container remove 591cd1200b9db9012e919f53357d0ed22ce7766e466816859838b422c458fd2a (image=quay.io/ceph/ceph:v19, name=boring_greider, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 05 09:47:27 compute-0 sudo[90527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:47:27 compute-0 systemd[1]: libpod-conmon-591cd1200b9db9012e919f53357d0ed22ce7766e466816859838b422c458fd2a.scope: Deactivated successfully.
Dec 05 09:47:27 compute-0 sudo[90527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90527]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 sudo[90190]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec 05 09:47:27 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec 05 09:47:27 compute-0 sudo[90585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:47:27 compute-0 sudo[90585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90585]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 sudo[90610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:47:27 compute-0 sudo[90610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90610]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 sudo[90657]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhvaawsntlroluekbzslbrjbtzsemqrl ; /usr/bin/python3'
Dec 05 09:47:27 compute-0 sudo[90657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:27 compute-0 sudo[90660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:27 compute-0 sudo[90660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90660]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:47:27 compute-0 sudo[90686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:47:27 compute-0 sudo[90686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90686]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 python3[90661]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:27 compute-0 sudo[90711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph
Dec 05 09:47:27 compute-0 sudo[90711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:47:27 compute-0 sudo[90711]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 sudo[90749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:47:27 compute-0 sudo[90749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90749]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 podman[90727]: 2025-12-05 09:47:27.787479759 +0000 UTC m=+0.025285193 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:27 compute-0 sudo[90774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:47:27 compute-0 sudo[90774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:27 compute-0 sudo[90774]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:47:27 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:47:28 compute-0 sudo[90799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:47:28 compute-0 sudo[90799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:28 compute-0 sudo[90799]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 compute-0 sudo[90847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:47:28 compute-0 sudo[90847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:28 compute-0 sudo[90847]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 compute-0 sudo[90872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:47:28 compute-0 sudo[90872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:28 compute-0 sudo[90872]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 compute-0 sudo[90897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 05 09:47:28 compute-0 sudo[90897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:28 compute-0 sudo[90897]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:47:28 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:47:28 compute-0 sudo[90922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:47:28 compute-0 sudo[90922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:28 compute-0 sudo[90922]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:47:28 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:47:28 compute-0 sudo[90947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:47:28 compute-0 sudo[90947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:28 compute-0 sudo[90947]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 compute-0 podman[90727]: 2025-12-05 09:47:28.480310503 +0000 UTC m=+0.718115897 container create 0c6cd73418d1bae84db6092af1ca01d91816d38ec870c3026ca6fca980bd4a8b (image=quay.io/ceph/ceph:v19, name=beautiful_carver, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 09:47:28 compute-0 sudo[90972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:47:28 compute-0 sudo[90972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:28 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec 05 09:47:28 compute-0 sudo[90972]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec 05 09:47:28 compute-0 sudo[90997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:47:28 compute-0 sudo[90997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:28 compute-0 sudo[90997]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:47:28 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:47:28 compute-0 sudo[91022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:47:28 compute-0 sudo[91022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:28 compute-0 sudo[91022]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 compute-0 sudo[91070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:47:28 compute-0 sudo[91070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:28 compute-0 sudo[91070]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 compute-0 sudo[91095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:47:28 compute-0 sudo[91095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:28 compute-0 sudo[91095]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v7: 197 pgs: 197 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:28 compute-0 sudo[91120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:47:28 compute-0 sudo[91120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:28 compute-0 sudo[91120]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:47:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:47:29 compute-0 ceph-mon[74418]: 6.14 scrub starts
Dec 05 09:47:29 compute-0 ceph-mon[74418]: 6.14 scrub ok
Dec 05 09:47:29 compute-0 ceph-mon[74418]: Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:29 compute-0 ceph-mon[74418]: Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:29 compute-0 ceph-mon[74418]: from='client.14406 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:29 compute-0 ceph-mon[74418]: Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:29 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:29 compute-0 ceph-mon[74418]: 3.1d scrub starts
Dec 05 09:47:29 compute-0 ceph-mon[74418]: 3.1d scrub ok
Dec 05 09:47:29 compute-0 ceph-mon[74418]: 7.d scrub starts
Dec 05 09:47:29 compute-0 ceph-mon[74418]: 7.d scrub ok
Dec 05 09:47:29 compute-0 ceph-mon[74418]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:47:29 compute-0 ceph-mon[74418]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:47:29 compute-0 ceph-mon[74418]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:47:29 compute-0 systemd[1]: Started libpod-conmon-0c6cd73418d1bae84db6092af1ca01d91816d38ec870c3026ca6fca980bd4a8b.scope.
Dec 05 09:47:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:47:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b273afbd6fea7c3144ec1bbeb3e8bbf0b791acc7472ef5917865c1b05f018ba2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b273afbd6fea7c3144ec1bbeb3e8bbf0b791acc7472ef5917865c1b05f018ba2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b273afbd6fea7c3144ec1bbeb3e8bbf0b791acc7472ef5917865c1b05f018ba2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:29 compute-0 podman[90727]: 2025-12-05 09:47:29.287939511 +0000 UTC m=+1.525744885 container init 0c6cd73418d1bae84db6092af1ca01d91816d38ec870c3026ca6fca980bd4a8b (image=quay.io/ceph/ceph:v19, name=beautiful_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:47:29 compute-0 podman[90727]: 2025-12-05 09:47:29.292937152 +0000 UTC m=+1.530742516 container start 0c6cd73418d1bae84db6092af1ca01d91816d38ec870c3026ca6fca980bd4a8b (image=quay.io/ceph/ceph:v19, name=beautiful_carver, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 09:47:29 compute-0 podman[90727]: 2025-12-05 09:47:29.296810821 +0000 UTC m=+1.534616175 container attach 0c6cd73418d1bae84db6092af1ca01d91816d38ec870c3026ca6fca980bd4a8b (image=quay.io/ceph/ceph:v19, name=beautiful_carver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:47:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:47:29 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec 05 09:47:29 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec 05 09:47:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:47:30 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:47:30 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:30 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev b3e29715-8b65-4d4f-8142-d4847e451eae (Updating node-exporter deployment (+3 -> 3))
Dec 05 09:47:30 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Dec 05 09:47:30 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Dec 05 09:47:30 compute-0 sudo[91170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:47:30 compute-0 sudo[91170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:30 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14412 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec 05 09:47:30 compute-0 sudo[91170]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:30 compute-0 sudo[91196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:47:30 compute-0 sudo[91196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:30 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:30 compute-0 beautiful_carver[91147]: Option GRAFANA_API_URL updated
Dec 05 09:47:30 compute-0 ceph-mon[74418]: 6.16 scrub starts
Dec 05 09:47:30 compute-0 ceph-mon[74418]: 6.16 scrub ok
Dec 05 09:47:30 compute-0 ceph-mon[74418]: Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:47:30 compute-0 ceph-mon[74418]: Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:47:30 compute-0 ceph-mon[74418]: 5.4 scrub starts
Dec 05 09:47:30 compute-0 ceph-mon[74418]: 5.4 scrub ok
Dec 05 09:47:30 compute-0 ceph-mon[74418]: 6.11 scrub starts
Dec 05 09:47:30 compute-0 ceph-mon[74418]: 6.11 scrub ok
Dec 05 09:47:30 compute-0 ceph-mon[74418]: 7.c scrub starts
Dec 05 09:47:30 compute-0 ceph-mon[74418]: Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:47:30 compute-0 ceph-mon[74418]: 7.c scrub ok
Dec 05 09:47:30 compute-0 ceph-mon[74418]: pgmap v7: 197 pgs: 197 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:30 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:30 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:30 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:30 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:30 compute-0 ceph-mon[74418]: 7.5 deep-scrub starts
Dec 05 09:47:30 compute-0 ceph-mon[74418]: 7.5 deep-scrub ok
Dec 05 09:47:30 compute-0 ceph-mon[74418]: 7.19 scrub starts
Dec 05 09:47:30 compute-0 ceph-mon[74418]: 7.19 scrub ok
Dec 05 09:47:30 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:30 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:30 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:30 compute-0 ceph-mon[74418]: Deploying daemon node-exporter.compute-0 on compute-0
Dec 05 09:47:30 compute-0 systemd[1]: libpod-0c6cd73418d1bae84db6092af1ca01d91816d38ec870c3026ca6fca980bd4a8b.scope: Deactivated successfully.
Dec 05 09:47:30 compute-0 podman[90727]: 2025-12-05 09:47:30.421999278 +0000 UTC m=+2.659804652 container died 0c6cd73418d1bae84db6092af1ca01d91816d38ec870c3026ca6fca980bd4a8b (image=quay.io/ceph/ceph:v19, name=beautiful_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 09:47:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b273afbd6fea7c3144ec1bbeb3e8bbf0b791acc7472ef5917865c1b05f018ba2-merged.mount: Deactivated successfully.
Dec 05 09:47:30 compute-0 podman[90727]: 2025-12-05 09:47:30.460222364 +0000 UTC m=+2.698027718 container remove 0c6cd73418d1bae84db6092af1ca01d91816d38ec870c3026ca6fca980bd4a8b (image=quay.io/ceph/ceph:v19, name=beautiful_carver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:47:30 compute-0 systemd[1]: libpod-conmon-0c6cd73418d1bae84db6092af1ca01d91816d38ec870c3026ca6fca980bd4a8b.scope: Deactivated successfully.
Dec 05 09:47:30 compute-0 sudo[90657]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:30 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec 05 09:47:30 compute-0 systemd[1]: Reloading.
Dec 05 09:47:30 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec 05 09:47:30 compute-0 systemd-rc-local-generator[91313]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:47:30 compute-0 systemd-sysv-generator[91320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:47:30 compute-0 sudo[91331]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bekhmfcwwcebeqaosyhcrbvwfrumujmz ; /usr/bin/python3'
Dec 05 09:47:30 compute-0 sudo[91331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:30 compute-0 systemd[1]: Reloading.
Dec 05 09:47:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v8: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.6 KiB/s wr, 207 op/s
Dec 05 09:47:30 compute-0 systemd-rc-local-generator[91364]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:47:30 compute-0 systemd-sysv-generator[91367]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:47:30 compute-0 python3[91335]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:31 compute-0 podman[91373]: 2025-12-05 09:47:30.97993615 +0000 UTC m=+0.020103518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:31 compute-0 podman[91373]: 2025-12-05 09:47:31.184446113 +0000 UTC m=+0.224613451 container create ce9a9425abfa2c0426fdc15fb8e78ddd8d7911e608ca6a8b4b429f1ca341f82a (image=quay.io/ceph/ceph:v19, name=great_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:47:31 compute-0 systemd[1]: Started libpod-conmon-ce9a9425abfa2c0426fdc15fb8e78ddd8d7911e608ca6a8b4b429f1ca341f82a.scope.
Dec 05 09:47:31 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:47:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae0b928ab0e883d3d8b600acdd1ca5a01c418b3af37669e068f20af823a483c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae0b928ab0e883d3d8b600acdd1ca5a01c418b3af37669e068f20af823a483c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae0b928ab0e883d3d8b600acdd1ca5a01c418b3af37669e068f20af823a483c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:31 compute-0 podman[91373]: 2025-12-05 09:47:31.386228539 +0000 UTC m=+0.426395947 container init ce9a9425abfa2c0426fdc15fb8e78ddd8d7911e608ca6a8b4b429f1ca341f82a (image=quay.io/ceph/ceph:v19, name=great_stonebraker, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:31 compute-0 podman[91373]: 2025-12-05 09:47:31.397861166 +0000 UTC m=+0.438028504 container start ce9a9425abfa2c0426fdc15fb8e78ddd8d7911e608ca6a8b4b429f1ca341f82a (image=quay.io/ceph/ceph:v19, name=great_stonebraker, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:31 compute-0 podman[91373]: 2025-12-05 09:47:31.432203684 +0000 UTC m=+0.472371112 container attach ce9a9425abfa2c0426fdc15fb8e78ddd8d7911e608ca6a8b4b429f1ca341f82a (image=quay.io/ceph/ceph:v19, name=great_stonebraker, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:47:31 compute-0 ceph-mon[74418]: 6.10 scrub starts
Dec 05 09:47:31 compute-0 ceph-mon[74418]: 6.10 scrub ok
Dec 05 09:47:31 compute-0 ceph-mon[74418]: from='client.14412 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:31 compute-0 ceph-mon[74418]: 7.1d scrub starts
Dec 05 09:47:31 compute-0 ceph-mon[74418]: 7.1d scrub ok
Dec 05 09:47:31 compute-0 ceph-mon[74418]: from='mgr.14343 192.168.122.100:0/2461694175' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:31 compute-0 ceph-mon[74418]: 7.1a scrub starts
Dec 05 09:47:31 compute-0 ceph-mon[74418]: 7.1a scrub ok
Dec 05 09:47:31 compute-0 ceph-mon[74418]: pgmap v8: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.6 KiB/s wr, 207 op/s
Dec 05 09:47:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:47:31 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec 05 09:47:31 compute-0 bash[91462]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec 05 09:47:31 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec 05 09:47:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec 05 09:47:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3714538630' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 05 09:47:32 compute-0 bash[91462]: Getting image source signatures
Dec 05 09:47:32 compute-0 bash[91462]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec 05 09:47:32 compute-0 bash[91462]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec 05 09:47:32 compute-0 bash[91462]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec 05 09:47:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3714538630' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  1: '-n'
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  2: 'mgr.compute-0.hvnxai'
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  3: '-f'
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  4: '--setuser'
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  5: 'ceph'
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  6: '--setgroup'
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  7: 'ceph'
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  8: '--default-log-to-file=false'
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  9: '--default-log-to-journald=true'
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr respawn  exe_path /proc/self/exe
Dec 05 09:47:32 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.hvnxai(active, since 11s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:32 compute-0 systemd[1]: libpod-ce9a9425abfa2c0426fdc15fb8e78ddd8d7911e608ca6a8b4b429f1ca341f82a.scope: Deactivated successfully.
Dec 05 09:47:32 compute-0 podman[91373]: 2025-12-05 09:47:32.51548207 +0000 UTC m=+1.555649428 container died ce9a9425abfa2c0426fdc15fb8e78ddd8d7911e608ca6a8b4b429f1ca341f82a (image=quay.io/ceph/ceph:v19, name=great_stonebraker, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:47:32 compute-0 ceph-mon[74418]: 6.13 scrub starts
Dec 05 09:47:32 compute-0 ceph-mon[74418]: 6.13 scrub ok
Dec 05 09:47:32 compute-0 ceph-mon[74418]: 2.1c scrub starts
Dec 05 09:47:32 compute-0 ceph-mon[74418]: 2.1c scrub ok
Dec 05 09:47:32 compute-0 ceph-mon[74418]: 5.1f scrub starts
Dec 05 09:47:32 compute-0 ceph-mon[74418]: 5.1f scrub ok
Dec 05 09:47:32 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3714538630' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 05 09:47:32 compute-0 sshd-session[89574]: Connection closed by 192.168.122.100 port 54374
Dec 05 09:47:32 compute-0 sshd-session[89560]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:47:32 compute-0 systemd-logind[789]: Session 34 logged out. Waiting for processes to exit.
Dec 05 09:47:32 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 05 09:47:32 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ignoring --setuser ceph since I am not root
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ignoring --setgroup ceph since I am not root
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: pidfile_write: ignore empty --pid-file
Dec 05 09:47:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ae0b928ab0e883d3d8b600acdd1ca5a01c418b3af37669e068f20af823a483c-merged.mount: Deactivated successfully.
Dec 05 09:47:32 compute-0 bash[91462]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec 05 09:47:32 compute-0 bash[91462]: Writing manifest to image destination
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'alerts'
Dec 05 09:47:32 compute-0 podman[91373]: 2025-12-05 09:47:32.684549244 +0000 UTC m=+1.724716582 container remove ce9a9425abfa2c0426fdc15fb8e78ddd8d7911e608ca6a8b4b429f1ca341f82a (image=quay.io/ceph/ceph:v19, name=great_stonebraker, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 09:47:32 compute-0 systemd[1]: libpod-conmon-ce9a9425abfa2c0426fdc15fb8e78ddd8d7911e608ca6a8b4b429f1ca341f82a.scope: Deactivated successfully.
Dec 05 09:47:32 compute-0 podman[91462]: 2025-12-05 09:47:32.701787481 +0000 UTC m=+1.142782394 container create dc2521f476ac6cd8b02d9a95c2d20034aa296ae30c8ddb7ef7e3087931bef2ec (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:47:32 compute-0 sudo[91331]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a9403cd47d44ce6c08dcf5c8aed76515d84e65b9c33beda1723d39d02c97d0/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:32 compute-0 podman[91462]: 2025-12-05 09:47:32.751741327 +0000 UTC m=+1.192736240 container init dc2521f476ac6cd8b02d9a95c2d20034aa296ae30c8ddb7ef7e3087931bef2ec (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:47:32 compute-0 podman[91462]: 2025-12-05 09:47:32.756573604 +0000 UTC m=+1.197568537 container start dc2521f476ac6cd8b02d9a95c2d20034aa296ae30c8ddb7ef7e3087931bef2ec (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:47:32 compute-0 bash[91462]: dc2521f476ac6cd8b02d9a95c2d20034aa296ae30c8ddb7ef7e3087931bef2ec
Dec 05 09:47:32 compute-0 podman[91462]: 2025-12-05 09:47:32.686764107 +0000 UTC m=+1.127759040 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.767Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.767Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.768Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.768Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.768Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.768Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 05 09:47:32 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=arp
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=bcache
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=bonding
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=cpu
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=dmi
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=edac
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=entropy
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=filefd
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=hwmon
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=netclass
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=netdev
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=netstat
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=nfs
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=nvme
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=os
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=pressure
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=rapl
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=selinux
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=softnet
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=stat
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=textfile
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=time
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=uname
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=xfs
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.769Z caller=node_exporter.go:117 level=info collector=zfs
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.770Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[91566]: ts=2025-12-05T09:47:32.770Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:32.775+0000 7fa0d75e7140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'balancer'
Dec 05 09:47:32 compute-0 sudo[91196]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:32 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Dec 05 09:47:32 compute-0 systemd[1]: session-34.scope: Consumed 5.009s CPU time.
Dec 05 09:47:32 compute-0 systemd-logind[789]: Removed session 34.
Dec 05 09:47:32 compute-0 sudo[91598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyxnpmdcjwosyxqycoynxabpqiokonkm ; /usr/bin/python3'
Dec 05 09:47:32 compute-0 sudo[91598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:32.865+0000 7fa0d75e7140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:47:32 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'cephadm'
Dec 05 09:47:33 compute-0 python3[91600]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:33 compute-0 podman[91601]: 2025-12-05 09:47:33.061486216 +0000 UTC m=+0.028630947 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:33 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.18 deep-scrub starts
Dec 05 09:47:33 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.18 deep-scrub ok
Dec 05 09:47:33 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'crash'
Dec 05 09:47:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:33.778+0000 7fa0d75e7140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:47:33 compute-0 ceph-mgr[74711]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:47:33 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'dashboard'
Dec 05 09:47:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'devicehealth'
Dec 05 09:47:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:34.523+0000 7fa0d75e7140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:47:34 compute-0 ceph-mgr[74711]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:47:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'diskprediction_local'
Dec 05 09:47:34 compute-0 podman[91601]: 2025-12-05 09:47:34.532094947 +0000 UTC m=+1.499239708 container create d3e3d3ad8f4e1f8ff72f2adf9f1fd85e348a03a11c7deba56af90abe2ad2c3aa (image=quay.io/ceph/ceph:v19, name=relaxed_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:47:34 compute-0 ceph-mon[74418]: 6.1d scrub starts
Dec 05 09:47:34 compute-0 ceph-mon[74418]: 6.1d scrub ok
Dec 05 09:47:34 compute-0 ceph-mon[74418]: 2.c scrub starts
Dec 05 09:47:34 compute-0 ceph-mon[74418]: 2.c scrub ok
Dec 05 09:47:34 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3714538630' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 05 09:47:34 compute-0 ceph-mon[74418]: mgrmap e17: compute-0.hvnxai(active, since 11s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:34 compute-0 ceph-mon[74418]: 4.1e scrub starts
Dec 05 09:47:34 compute-0 ceph-mon[74418]: 4.1e scrub ok
Dec 05 09:47:34 compute-0 ceph-mon[74418]: 4.13 deep-scrub starts
Dec 05 09:47:34 compute-0 ceph-mon[74418]: 4.13 deep-scrub ok
Dec 05 09:47:34 compute-0 systemd[1]: Started libpod-conmon-d3e3d3ad8f4e1f8ff72f2adf9f1fd85e348a03a11c7deba56af90abe2ad2c3aa.scope.
Dec 05 09:47:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0e24fa37e820ad01f7201d0f8a19f547fb1be8514afef8d856fa356c21e82d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0e24fa37e820ad01f7201d0f8a19f547fb1be8514afef8d856fa356c21e82d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0e24fa37e820ad01f7201d0f8a19f547fb1be8514afef8d856fa356c21e82d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:34 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Dec 05 09:47:34 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Dec 05 09:47:34 compute-0 podman[91601]: 2025-12-05 09:47:34.708057386 +0000 UTC m=+1.675202117 container init d3e3d3ad8f4e1f8ff72f2adf9f1fd85e348a03a11c7deba56af90abe2ad2c3aa (image=quay.io/ceph/ceph:v19, name=relaxed_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 09:47:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 05 09:47:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 05 09:47:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   from numpy import show_config as show_numpy_config
Dec 05 09:47:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:34.721+0000 7fa0d75e7140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:47:34 compute-0 ceph-mgr[74711]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:47:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'influx'
Dec 05 09:47:34 compute-0 podman[91601]: 2025-12-05 09:47:34.728357997 +0000 UTC m=+1.695502718 container start d3e3d3ad8f4e1f8ff72f2adf9f1fd85e348a03a11c7deba56af90abe2ad2c3aa (image=quay.io/ceph/ceph:v19, name=relaxed_williamson, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:47:34 compute-0 podman[91601]: 2025-12-05 09:47:34.732527834 +0000 UTC m=+1.699672575 container attach d3e3d3ad8f4e1f8ff72f2adf9f1fd85e348a03a11c7deba56af90abe2ad2c3aa (image=quay.io/ceph/ceph:v19, name=relaxed_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:34.798+0000 7fa0d75e7140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:47:34 compute-0 ceph-mgr[74711]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:47:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'insights'
Dec 05 09:47:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'iostat'
Dec 05 09:47:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:34.952+0000 7fa0d75e7140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:47:34 compute-0 ceph-mgr[74711]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:47:34 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'k8sevents'
Dec 05 09:47:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec 05 09:47:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2676205759' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 05 09:47:35 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'localpool'
Dec 05 09:47:35 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'mds_autoscaler'
Dec 05 09:47:35 compute-0 ceph-mon[74418]: 7.a scrub starts
Dec 05 09:47:35 compute-0 ceph-mon[74418]: 7.a scrub ok
Dec 05 09:47:35 compute-0 ceph-mon[74418]: 3.13 scrub starts
Dec 05 09:47:35 compute-0 ceph-mon[74418]: 3.13 scrub ok
Dec 05 09:47:35 compute-0 ceph-mon[74418]: 3.18 deep-scrub starts
Dec 05 09:47:35 compute-0 ceph-mon[74418]: 3.18 deep-scrub ok
Dec 05 09:47:35 compute-0 ceph-mon[74418]: 7.14 scrub starts
Dec 05 09:47:35 compute-0 ceph-mon[74418]: 7.14 scrub ok
Dec 05 09:47:35 compute-0 ceph-mon[74418]: 5.15 scrub starts
Dec 05 09:47:35 compute-0 ceph-mon[74418]: 5.15 scrub ok
Dec 05 09:47:35 compute-0 ceph-mon[74418]: 7.13 deep-scrub starts
Dec 05 09:47:35 compute-0 ceph-mon[74418]: 7.13 deep-scrub ok
Dec 05 09:47:35 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2676205759' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 05 09:47:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2676205759' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 05 09:47:35 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.hvnxai(active, since 14s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:35 compute-0 systemd[1]: libpod-d3e3d3ad8f4e1f8ff72f2adf9f1fd85e348a03a11c7deba56af90abe2ad2c3aa.scope: Deactivated successfully.
Dec 05 09:47:35 compute-0 podman[91601]: 2025-12-05 09:47:35.605643298 +0000 UTC m=+2.572788019 container died d3e3d3ad8f4e1f8ff72f2adf9f1fd85e348a03a11c7deba56af90abe2ad2c3aa (image=quay.io/ceph/ceph:v19, name=relaxed_williamson, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b0e24fa37e820ad01f7201d0f8a19f547fb1be8514afef8d856fa356c21e82d-merged.mount: Deactivated successfully.
Dec 05 09:47:35 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 05 09:47:35 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'mirroring'
Dec 05 09:47:35 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 05 09:47:35 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'nfs'
Dec 05 09:47:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:36.018+0000 7fa0d75e7140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'orchestrator'
Dec 05 09:47:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:36.244+0000 7fa0d75e7140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'osd_perf_query'
Dec 05 09:47:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:36.323+0000 7fa0d75e7140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'osd_support'
Dec 05 09:47:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:36.389+0000 7fa0d75e7140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'pg_autoscaler'
Dec 05 09:47:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:36.465+0000 7fa0d75e7140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'progress'
Dec 05 09:47:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:36.534+0000 7fa0d75e7140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'prometheus'
Dec 05 09:47:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:47:36 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 05 09:47:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:36.901+0000 7fa0d75e7140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:47:36 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rbd_support'
Dec 05 09:47:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:37.000+0000 7fa0d75e7140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:47:37 compute-0 ceph-mgr[74711]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:47:37 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'restful'
Dec 05 09:47:37 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rgw'
Dec 05 09:47:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:37.450+0000 7fa0d75e7140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:47:37 compute-0 ceph-mgr[74711]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:47:37 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rook'
Dec 05 09:47:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:38.044+0000 7fa0d75e7140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'selftest'
Dec 05 09:47:38 compute-0 ceph-mon[74418]: 2.f deep-scrub starts
Dec 05 09:47:38 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 05 09:47:38 compute-0 ceph-mon[74418]: 2.f deep-scrub ok
Dec 05 09:47:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2676205759' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 05 09:47:38 compute-0 ceph-mon[74418]: mgrmap e18: compute-0.hvnxai(active, since 14s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:38 compute-0 ceph-mon[74418]: 3.10 scrub starts
Dec 05 09:47:38 compute-0 ceph-mon[74418]: 3.10 scrub ok
Dec 05 09:47:38 compute-0 ceph-mon[74418]: 7.10 scrub starts
Dec 05 09:47:38 compute-0 ceph-mon[74418]: 7.10 scrub ok
Dec 05 09:47:38 compute-0 ceph-mon[74418]: 2.13 scrub starts
Dec 05 09:47:38 compute-0 ceph-mon[74418]: 2.13 scrub ok
Dec 05 09:47:38 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 05 09:47:38 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 05 09:47:38 compute-0 podman[91601]: 2025-12-05 09:47:38.112027166 +0000 UTC m=+5.079171887 container remove d3e3d3ad8f4e1f8ff72f2adf9f1fd85e348a03a11c7deba56af90abe2ad2c3aa (image=quay.io/ceph/ceph:v19, name=relaxed_williamson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 09:47:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:38.128+0000 7fa0d75e7140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'snap_schedule'
Dec 05 09:47:38 compute-0 sudo[91598]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:38 compute-0 systemd[1]: libpod-conmon-d3e3d3ad8f4e1f8ff72f2adf9f1fd85e348a03a11c7deba56af90abe2ad2c3aa.scope: Deactivated successfully.
Dec 05 09:47:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:38.219+0000 7fa0d75e7140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'stats'
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'status'
Dec 05 09:47:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:38.379+0000 7fa0d75e7140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'telegraf'
Dec 05 09:47:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:38.455+0000 7fa0d75e7140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'telemetry'
Dec 05 09:47:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:38.638+0000 7fa0d75e7140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'test_orchestrator'
Dec 05 09:47:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:38.887+0000 7fa0d75e7140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:47:38 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'volumes'
Dec 05 09:47:39 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 05 09:47:39 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 05 09:47:39 compute-0 python3[91740]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:47:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:39.204+0000 7fa0d75e7140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'zabbix'
Dec 05 09:47:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:39.273+0000 7fa0d75e7140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 09:47:39 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Active manager daemon compute-0.hvnxai restarted
Dec 05 09:47:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 05 09:47:39 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hvnxai
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: ms_deliver_dispatch: unhandled message 0x5578c15ad860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  1: '-n'
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  2: 'mgr.compute-0.hvnxai'
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  3: '-f'
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  4: '--setuser'
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  5: 'ceph'
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  6: '--setgroup'
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  7: 'ceph'
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  8: '--default-log-to-file=false'
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  9: '--default-log-to-journald=true'
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr respawn  exe_path /proc/self/exe
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 5.17 scrub starts
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 5.9 scrub starts
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 5.9 scrub ok
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 2.12 deep-scrub starts
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 2.12 deep-scrub ok
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 5.16 scrub starts
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 5.16 scrub ok
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 5.1e scrub starts
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 5.17 scrub ok
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 5.1e scrub ok
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 7.16 deep-scrub starts
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 7.16 deep-scrub ok
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 6.a scrub starts
Dec 05 09:47:39 compute-0 ceph-mon[74418]: 6.a scrub ok
Dec 05 09:47:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ignoring --setuser ceph since I am not root
Dec 05 09:47:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ignoring --setgroup ceph since I am not root
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: pidfile_write: ignore empty --pid-file
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'alerts'
Dec 05 09:47:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:39.513+0000 7faef6549140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'balancer'
Dec 05 09:47:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:39.596+0000 7faef6549140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:47:39 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'cephadm'
Dec 05 09:47:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 05 09:47:39 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 05 09:47:39 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.hvnxai(active, starting, since 0.687951s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:39 compute-0 python3[91811]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764928058.730901-37387-216361966177298/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:40 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 05 09:47:40 compute-0 sudo[91890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqlnthiessvtxtybpiiujzeabewiyckc ; /usr/bin/python3'
Dec 05 09:47:40 compute-0 sudo[91890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:40 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'crash'
Dec 05 09:47:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:40.440+0000 7faef6549140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:47:40 compute-0 ceph-mgr[74711]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:47:40 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'dashboard'
Dec 05 09:47:40 compute-0 python3[91892]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:40 compute-0 podman[91893]: 2025-12-05 09:47:40.555531361 +0000 UTC m=+0.023704438 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:41 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'devicehealth'
Dec 05 09:47:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:41.105+0000 7faef6549140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:47:41 compute-0 ceph-mgr[74711]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:47:41 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'diskprediction_local'
Dec 05 09:47:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 05 09:47:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 05 09:47:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   from numpy import show_config as show_numpy_config
Dec 05 09:47:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:41.271+0000 7faef6549140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:47:41 compute-0 ceph-mgr[74711]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:47:41 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'influx'
Dec 05 09:47:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:41.345+0000 7faef6549140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:47:41 compute-0 ceph-mgr[74711]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:47:41 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'insights'
Dec 05 09:47:41 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'iostat'
Dec 05 09:47:41 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 05 09:47:41 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 05 09:47:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:41.499+0000 7faef6549140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:47:41 compute-0 ceph-mgr[74711]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:47:41 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'k8sevents'
Dec 05 09:47:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:47:41 compute-0 podman[91893]: 2025-12-05 09:47:41.554777469 +0000 UTC m=+1.022950526 container create b3af8e4e45dce5cbe766ae85b7ced476b7979dda09dca0293a0b0593e707256a (image=quay.io/ceph/ceph:v19, name=lucid_feistel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 05 09:47:41 compute-0 ceph-mon[74418]: 2.19 scrub starts
Dec 05 09:47:41 compute-0 ceph-mon[74418]: 2.19 scrub ok
Dec 05 09:47:41 compute-0 ceph-mon[74418]: 2.10 scrub starts
Dec 05 09:47:41 compute-0 ceph-mon[74418]: 2.10 scrub ok
Dec 05 09:47:41 compute-0 ceph-mon[74418]: Active manager daemon compute-0.hvnxai restarted
Dec 05 09:47:41 compute-0 ceph-mon[74418]: Activating manager daemon compute-0.hvnxai
Dec 05 09:47:41 compute-0 ceph-mon[74418]: 3.f scrub starts
Dec 05 09:47:41 compute-0 ceph-mon[74418]: 3.f scrub ok
Dec 05 09:47:41 compute-0 ceph-mon[74418]: osdmap e47: 3 total, 3 up, 3 in
Dec 05 09:47:41 compute-0 ceph-mon[74418]: mgrmap e19: compute-0.hvnxai(active, starting, since 0.687951s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:41 compute-0 systemd[1]: Started libpod-conmon-b3af8e4e45dce5cbe766ae85b7ced476b7979dda09dca0293a0b0593e707256a.scope.
Dec 05 09:47:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bb119973027e47df23552d9884aeb2cc51e040007ed4274db44dbf7ab5bbe64/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bb119973027e47df23552d9884aeb2cc51e040007ed4274db44dbf7ab5bbe64/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bb119973027e47df23552d9884aeb2cc51e040007ed4274db44dbf7ab5bbe64/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:41 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 05 09:47:41 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'localpool'
Dec 05 09:47:42 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'mds_autoscaler'
Dec 05 09:47:42 compute-0 podman[91893]: 2025-12-05 09:47:42.047895085 +0000 UTC m=+1.516068152 container init b3af8e4e45dce5cbe766ae85b7ced476b7979dda09dca0293a0b0593e707256a (image=quay.io/ceph/ceph:v19, name=lucid_feistel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:47:42 compute-0 podman[91893]: 2025-12-05 09:47:42.055746537 +0000 UTC m=+1.523919594 container start b3af8e4e45dce5cbe766ae85b7ced476b7979dda09dca0293a0b0593e707256a (image=quay.io/ceph/ceph:v19, name=lucid_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:42 compute-0 podman[91893]: 2025-12-05 09:47:42.087717017 +0000 UTC m=+1.555890074 container attach b3af8e4e45dce5cbe766ae85b7ced476b7979dda09dca0293a0b0593e707256a (image=quay.io/ceph/ceph:v19, name=lucid_feistel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:47:42 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'mirroring'
Dec 05 09:47:42 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'nfs'
Dec 05 09:47:42 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 05 09:47:42 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 05 09:47:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:42.645+0000 7faef6549140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:47:42 compute-0 ceph-mgr[74711]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:47:42 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'orchestrator'
Dec 05 09:47:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:42.883+0000 7faef6549140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:47:42 compute-0 ceph-mgr[74711]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:47:42 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'osd_perf_query'
Dec 05 09:47:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:42.969+0000 7faef6549140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:47:42 compute-0 ceph-mgr[74711]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:47:42 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'osd_support'
Dec 05 09:47:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:43.033+0000 7faef6549140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:47:43 compute-0 ceph-mgr[74711]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:47:43 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'pg_autoscaler'
Dec 05 09:47:43 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 05 09:47:43 compute-0 systemd[75697]: Activating special unit Exit the Session...
Dec 05 09:47:43 compute-0 systemd[75697]: Stopped target Main User Target.
Dec 05 09:47:43 compute-0 systemd[75697]: Stopped target Basic System.
Dec 05 09:47:43 compute-0 systemd[75697]: Stopped target Paths.
Dec 05 09:47:43 compute-0 systemd[75697]: Stopped target Sockets.
Dec 05 09:47:43 compute-0 systemd[75697]: Stopped target Timers.
Dec 05 09:47:43 compute-0 systemd[75697]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 05 09:47:43 compute-0 systemd[75697]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 09:47:43 compute-0 systemd[75697]: Closed D-Bus User Message Bus Socket.
Dec 05 09:47:43 compute-0 systemd[75697]: Stopped Create User's Volatile Files and Directories.
Dec 05 09:47:43 compute-0 systemd[75697]: Removed slice User Application Slice.
Dec 05 09:47:43 compute-0 systemd[75697]: Reached target Shutdown.
Dec 05 09:47:43 compute-0 systemd[75697]: Finished Exit the Session.
Dec 05 09:47:43 compute-0 systemd[75697]: Reached target Exit the Session.
Dec 05 09:47:43 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 05 09:47:43 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 05 09:47:43 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 05 09:47:43 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 05 09:47:43 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 05 09:47:43 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 05 09:47:43 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 05 09:47:43 compute-0 systemd[1]: user-42477.slice: Consumed 32.469s CPU time.
Dec 05 09:47:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:43.115+0000 7faef6549140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:47:43 compute-0 ceph-mgr[74711]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:47:43 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'progress'
Dec 05 09:47:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:43.186+0000 7faef6549140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:47:43 compute-0 ceph-mgr[74711]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:47:43 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'prometheus'
Dec 05 09:47:43 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 05 09:47:43 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 05 09:47:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:43.545+0000 7faef6549140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:47:43 compute-0 ceph-mgr[74711]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:47:43 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rbd_support'
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 7.b scrub starts
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 2.15 scrub starts
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 2.15 scrub ok
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 3.c scrub starts
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 3.c scrub ok
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 7.11 scrub starts
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 7.11 scrub ok
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 7.b scrub ok
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 5.a scrub starts
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 4.a scrub starts
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 4.a scrub ok
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 5.a scrub ok
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 6.12 scrub starts
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 6.12 scrub ok
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 3.19 scrub starts
Dec 05 09:47:43 compute-0 ceph-mon[74418]: 3.19 scrub ok
Dec 05 09:47:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:43.654+0000 7faef6549140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:47:43 compute-0 ceph-mgr[74711]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:47:43 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'restful'
Dec 05 09:47:43 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rgw'
Dec 05 09:47:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:44.137+0000 7faef6549140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:47:44 compute-0 ceph-mgr[74711]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:47:44 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rook'
Dec 05 09:47:44 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 05 09:47:44 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 05 09:47:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:44.751+0000 7faef6549140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:47:44 compute-0 ceph-mgr[74711]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:47:44 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'selftest'
Dec 05 09:47:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:44.833+0000 7faef6549140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:47:44 compute-0 ceph-mgr[74711]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:47:44 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'snap_schedule'
Dec 05 09:47:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:44.923+0000 7faef6549140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 09:47:44 compute-0 ceph-mgr[74711]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 09:47:44 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'stats'
Dec 05 09:47:45 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'status'
Dec 05 09:47:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:45.093+0000 7faef6549140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:47:45 compute-0 ceph-mgr[74711]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:47:45 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'telegraf'
Dec 05 09:47:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:45.176+0000 7faef6549140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:47:45 compute-0 ceph-mgr[74711]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:47:45 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'telemetry'
Dec 05 09:47:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:45.358+0000 7faef6549140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:47:45 compute-0 ceph-mgr[74711]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:47:45 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'test_orchestrator'
Dec 05 09:47:45 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 05 09:47:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:45.596+0000 7faef6549140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:47:45 compute-0 ceph-mgr[74711]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:47:45 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'volumes'
Dec 05 09:47:45 compute-0 ceph-mon[74418]: 6.8 scrub starts
Dec 05 09:47:45 compute-0 ceph-mon[74418]: 6.8 scrub ok
Dec 05 09:47:45 compute-0 ceph-mon[74418]: 2.d scrub starts
Dec 05 09:47:45 compute-0 ceph-mon[74418]: 2.d scrub ok
Dec 05 09:47:45 compute-0 ceph-mon[74418]: 2.e scrub starts
Dec 05 09:47:45 compute-0 ceph-mon[74418]: 2.e scrub ok
Dec 05 09:47:45 compute-0 ceph-mon[74418]: 3.a scrub starts
Dec 05 09:47:45 compute-0 ceph-mon[74418]: 3.a scrub ok
Dec 05 09:47:45 compute-0 ceph-mon[74418]: 7.8 scrub starts
Dec 05 09:47:45 compute-0 ceph-mon[74418]: 7.8 scrub ok
Dec 05 09:47:45 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 05 09:47:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:45.935+0000 7faef6549140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:47:45 compute-0 ceph-mgr[74711]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:47:45 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'zabbix'
Dec 05 09:47:45 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.wewrgp restarted
Dec 05 09:47:45 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.wewrgp started
Dec 05 09:47:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:47:46.019+0000 7faef6549140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Active manager daemon compute-0.hvnxai restarted
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hvnxai
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: ms_deliver_dispatch: unhandled message 0x55d2ae537860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr handle_mgr_map Activating!
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr handle_mgr_map I am now activating
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.hvnxai(active, starting, since 0.0410115s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.unhddt", "id": "compute-1.unhddt"} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unhddt", "id": "compute-1.unhddt"}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.wewrgp", "id": "compute-2.wewrgp"} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-2.wewrgp", "id": "compute-2.wewrgp"}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e1 all = 1
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: balancer
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Manager daemon compute-0.hvnxai is now available
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [balancer INFO root] Starting
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:47:46
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: cephadm
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: crash
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: dashboard
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO sso] Loading SSO DB version=1
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: devicehealth
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Starting
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: iostat
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: nfs
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: orchestrator
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: pg_autoscaler
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: progress
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [progress INFO root] Loading...
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fae76055100>, <progress.module.GhostEvent object at 0x7fae76055370>, <progress.module.GhostEvent object at 0x7fae76055430>, <progress.module.GhostEvent object at 0x7fae760553a0>, <progress.module.GhostEvent object at 0x7fae760553d0>, <progress.module.GhostEvent object at 0x7fae760552b0>, <progress.module.GhostEvent object at 0x7fae76055250>, <progress.module.GhostEvent object at 0x7fae76055400>, <progress.module.GhostEvent object at 0x7fae76055460>, <progress.module.GhostEvent object at 0x7fae76055490>, <progress.module.GhostEvent object at 0x7fae760554c0>, <progress.module.GhostEvent object at 0x7fae760554f0>] historic events
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [progress INFO root] Loaded OSDMap, ready.
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] recovery thread starting
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] starting setup
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: rbd_support
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: restful
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: status
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [restful INFO root] server_addr: :: server_port: 8003
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [restful WARNING root] server not running: no certificate configured
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: telemetry
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] PerfHandler: starting
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TaskHandler: starting
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"} v 0)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"}]: dispatch
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: volumes
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] setup complete
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 05 09:47:46 compute-0 sshd-session[92050]: Accepted publickey for ceph-admin from 192.168.122.100 port 56438 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 05 09:47:46 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 05 09:47:46 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 05 09:47:46 compute-0 systemd-logind[789]: New session 35 of user ceph-admin.
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 05 09:47:46 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 05 09:47:46 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 05 09:47:46 compute-0 systemd[92054]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:47:46 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 05 09:47:46 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 05 09:47:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:47:46 compute-0 systemd[92054]: Queued start job for default target Main User Target.
Dec 05 09:47:46 compute-0 systemd[92054]: Created slice User Application Slice.
Dec 05 09:47:46 compute-0 systemd[92054]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 09:47:46 compute-0 systemd[92054]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 09:47:46 compute-0 systemd[92054]: Reached target Paths.
Dec 05 09:47:46 compute-0 systemd[92054]: Reached target Timers.
Dec 05 09:47:46 compute-0 systemd[92054]: Starting D-Bus User Message Bus Socket...
Dec 05 09:47:46 compute-0 systemd[92054]: Starting Create User's Volatile Files and Directories...
Dec 05 09:47:46 compute-0 systemd[92054]: Listening on D-Bus User Message Bus Socket.
Dec 05 09:47:46 compute-0 systemd[92054]: Reached target Sockets.
Dec 05 09:47:46 compute-0 systemd[92054]: Finished Create User's Volatile Files and Directories.
Dec 05 09:47:46 compute-0 systemd[92054]: Reached target Basic System.
Dec 05 09:47:46 compute-0 systemd[92054]: Reached target Main User Target.
Dec 05 09:47:46 compute-0 systemd[92054]: Startup finished in 141ms.
Dec 05 09:47:46 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 05 09:47:46 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Dec 05 09:47:46 compute-0 sshd-session[92050]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:47:46 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.module] Engine started.
Dec 05 09:47:46 compute-0 sudo[92081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:47:46 compute-0 sudo[92081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:46 compute-0 sudo[92081]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:46 compute-0 sudo[92107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 09:47:46 compute-0 sudo[92107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unhddt restarted
Dec 05 09:47:46 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unhddt started
Dec 05 09:47:47 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:47:47] ENGINE Bus STARTING
Dec 05 09:47:47 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:47:47] ENGINE Bus STARTING
Dec 05 09:47:47 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:47:47] ENGINE Serving on http://192.168.122.100:8765
Dec 05 09:47:47 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:47:47] ENGINE Serving on http://192.168.122.100:8765
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 6.17 deep-scrub starts
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 6.17 deep-scrub ok
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 3.d scrub starts
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 3.d scrub ok
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 2.5 scrub starts
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 2.5 scrub ok
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 5.c scrub starts
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 5.c scrub ok
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 4.5 scrub starts
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 4.5 scrub ok
Dec 05 09:47:47 compute-0 ceph-mon[74418]: Standby manager daemon compute-2.wewrgp restarted
Dec 05 09:47:47 compute-0 ceph-mon[74418]: Standby manager daemon compute-2.wewrgp started
Dec 05 09:47:47 compute-0 ceph-mon[74418]: Active manager daemon compute-0.hvnxai restarted
Dec 05 09:47:47 compute-0 ceph-mon[74418]: Activating manager daemon compute-0.hvnxai
Dec 05 09:47:47 compute-0 ceph-mon[74418]: osdmap e48: 3 total, 3 up, 3 in
Dec 05 09:47:47 compute-0 ceph-mon[74418]: mgrmap e20: compute-0.hvnxai(active, starting, since 0.0410115s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unhddt", "id": "compute-1.unhddt"}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-2.wewrgp", "id": "compute-2.wewrgp"}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: Manager daemon compute-0.hvnxai is now available
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 4.1 scrub starts
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 4.1 scrub ok
Dec 05 09:47:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"}]: dispatch
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 3.b scrub starts
Dec 05 09:47:47 compute-0 ceph-mon[74418]: 3.b scrub ok
Dec 05 09:47:47 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 05 09:47:47 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 05 09:47:47 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:47:47] ENGINE Serving on https://192.168.122.100:7150
Dec 05 09:47:47 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:47:47] ENGINE Serving on https://192.168.122.100:7150
Dec 05 09:47:47 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:47:47] ENGINE Bus STARTED
Dec 05 09:47:47 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:47:47] ENGINE Bus STARTED
Dec 05 09:47:47 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:47:47] ENGINE Client ('192.168.122.100', 43368) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:47:47 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:47:47] ENGINE Client ('192.168.122.100', 43368) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:47:47 compute-0 podman[92201]: 2025-12-05 09:47:47.766250517 +0000 UTC m=+0.507735837 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:47 compute-0 podman[92201]: 2025-12-05 09:47:47.86667923 +0000 UTC m=+0.608164520 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:47:48 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:47:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:47:48 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Dec 05 09:47:48 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Dec 05 09:47:49 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 05 09:47:49 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 05 09:47:50 compute-0 ceph-mgr[74711]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 09:47:50 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14451 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:50 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.hvnxai(active, since 4s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:50 compute-0 ceph-mgr[74711]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 05 09:47:50 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 05 09:47:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 1 active+clean+scrubbing, 195 active+clean, 1 active+clean+scrubbing+deep; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:51 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 05 09:47:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec 05 09:47:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 05 09:47:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec 05 09:47:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 05 09:47:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec 05 09:47:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 05 09:47:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 05 09:47:51 compute-0 ceph-mon[74418]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 05 09:47:51 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 05 09:47:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0[74414]: 2025-12-05T09:47:51.447+0000 7f838e730640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 05 09:47:51 compute-0 ceph-mon[74418]: 6.7 scrub starts
Dec 05 09:47:51 compute-0 ceph-mon[74418]: 6.7 scrub ok
Dec 05 09:47:51 compute-0 ceph-mon[74418]: Standby manager daemon compute-1.unhddt restarted
Dec 05 09:47:51 compute-0 ceph-mon[74418]: Standby manager daemon compute-1.unhddt started
Dec 05 09:47:51 compute-0 ceph-mon[74418]: 6.1e scrub starts
Dec 05 09:47:51 compute-0 ceph-mon[74418]: 6.1e scrub ok
Dec 05 09:47:51 compute-0 ceph-mon[74418]: [05/Dec/2025:09:47:47] ENGINE Bus STARTING
Dec 05 09:47:51 compute-0 ceph-mon[74418]: [05/Dec/2025:09:47:47] ENGINE Serving on http://192.168.122.100:8765
Dec 05 09:47:51 compute-0 ceph-mon[74418]: [05/Dec/2025:09:47:47] ENGINE Serving on https://192.168.122.100:7150
Dec 05 09:47:51 compute-0 ceph-mon[74418]: [05/Dec/2025:09:47:47] ENGINE Bus STARTED
Dec 05 09:47:51 compute-0 ceph-mon[74418]: [05/Dec/2025:09:47:47] ENGINE Client ('192.168.122.100', 43368) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:47:51 compute-0 ceph-mon[74418]: 5.7 scrub starts
Dec 05 09:47:51 compute-0 ceph-mon[74418]: 5.7 scrub ok
Dec 05 09:47:51 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 05 09:47:51 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 05 09:47:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:47:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 05 09:47:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e2 new map
Dec 05 09:47:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-12-05T09:47:51.448980+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-05T09:47:51.448919+0000
                                           modified        2025-12-05T09:47:51.448919+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 05 09:47:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 05 09:47:51 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.hvnxai(active, since 5s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:51 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 05 09:47:51 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 05 09:47:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 1 active+clean+scrubbing, 195 active+clean, 1 active+clean+scrubbing+deep; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:52 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 05 09:47:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:47:52 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 05 09:47:52 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 05 09:47:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 05 09:47:53 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 05 09:47:53 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 05 09:47:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 1 active+clean+scrubbing, 195 active+clean, 1 active+clean+scrubbing+deep; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:54 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Check health
Dec 05 09:47:55 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 05 09:47:55 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 05 09:47:55 compute-0 podman[92348]: 2025-12-05 09:47:55.126505546 +0000 UTC m=+3.305866327 container exec dc2521f476ac6cd8b02d9a95c2d20034aa296ae30c8ddb7ef7e3087931bef2ec (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:47:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v7: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 12 op/s
Dec 05 09:47:56 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 05 09:47:56 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 05 09:47:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:47:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:47:56 compute-0 podman[92348]: 2025-12-05 09:47:56.982715056 +0000 UTC m=+5.162075857 container exec_died dc2521f476ac6cd8b02d9a95c2d20034aa296ae30c8ddb7ef7e3087931bef2ec (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:47:57 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 7.e scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 7.e scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 4.1d scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 4.1d scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 5.6 deep-scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 5.6 deep-scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 6.5 scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 6.5 scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 7.1f scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 7.1f scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 3.6 scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 3.6 scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 5.2 scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: from='client.14451 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:57 compute-0 ceph-mon[74418]: mgrmap e21: compute-0.hvnxai(active, since 4s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 4.19 scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 4.19 scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 5.3 scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: pgmap v3: 197 pgs: 1 active+clean+scrubbing, 195 active+clean, 1 active+clean+scrubbing+deep; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 5.2 scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 3.3 deep-scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 6.1b scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 6.1b scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 5.3 scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 05 09:47:57 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 05 09:47:57 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 05 09:47:57 compute-0 ceph-mon[74418]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 05 09:47:57 compute-0 ceph-mon[74418]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 3.3 deep-scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 3.7 scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 3.7 scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:57 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 05 09:47:57 compute-0 ceph-mon[74418]: mgrmap e22: compute-0.hvnxai(active, since 5s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:57 compute-0 ceph-mon[74418]: osdmap e49: 3 total, 3 up, 3 in
Dec 05 09:47:57 compute-0 ceph-mon[74418]: fsmap cephfs:0
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 3.5 scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: pgmap v5: 197 pgs: 1 active+clean+scrubbing, 195 active+clean, 1 active+clean+scrubbing+deep; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 4.1c deep-scrub starts
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 4.1c deep-scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: 3.5 scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:57 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.hvnxai(active, since 11s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:47:57 compute-0 sudo[92107]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:57 compute-0 ceph-mgr[74711]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 05 09:47:57 compute-0 systemd[1]: libpod-b3af8e4e45dce5cbe766ae85b7ced476b7979dda09dca0293a0b0593e707256a.scope: Deactivated successfully.
Dec 05 09:47:57 compute-0 podman[91893]: 2025-12-05 09:47:57.741569385 +0000 UTC m=+17.209742452 container died b3af8e4e45dce5cbe766ae85b7ced476b7979dda09dca0293a0b0593e707256a (image=quay.io/ceph/ceph:v19, name=lucid_feistel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 09:47:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:47:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:47:57 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 05 09:47:57 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 05 09:47:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bb119973027e47df23552d9884aeb2cc51e040007ed4274db44dbf7ab5bbe64-merged.mount: Deactivated successfully.
Dec 05 09:47:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:47:57 compute-0 podman[91893]: 2025-12-05 09:47:57.858409374 +0000 UTC m=+17.326582441 container remove b3af8e4e45dce5cbe766ae85b7ced476b7979dda09dca0293a0b0593e707256a (image=quay.io/ceph/ceph:v19, name=lucid_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 05 09:47:57 compute-0 systemd[1]: libpod-conmon-b3af8e4e45dce5cbe766ae85b7ced476b7979dda09dca0293a0b0593e707256a.scope: Deactivated successfully.
Dec 05 09:47:57 compute-0 sudo[91890]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 05 09:47:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:58 compute-0 sudo[92407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:47:58 compute-0 sudo[92407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:58 compute-0 sudo[92407]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:58 compute-0 sudo[92454]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsziaasuqqkgzusjqkphcevkthhkjwhj ; /usr/bin/python3'
Dec 05 09:47:58 compute-0 sudo[92454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:58 compute-0 sudo[92457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:47:58 compute-0 sudo[92457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v8: 197 pgs: 2 active+clean+scrubbing, 195 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 8 op/s
Dec 05 09:47:58 compute-0 python3[92458]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:47:58 compute-0 podman[92483]: 2025-12-05 09:47:58.254993545 +0000 UTC m=+0.055705807 container create 21c765035944408dca6690a8165c26dfdebe9aacde7f01e0e00a672a04ccf5c9 (image=quay.io/ceph/ceph:v19, name=hungry_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:47:58 compute-0 podman[92483]: 2025-12-05 09:47:58.23420149 +0000 UTC m=+0.034913772 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:58 compute-0 systemd[1]: Started libpod-conmon-21c765035944408dca6690a8165c26dfdebe9aacde7f01e0e00a672a04ccf5c9.scope.
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 7.9 scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 5.1 scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 7.9 scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 2.18 scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 2.18 scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 5.14 scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 5.1 scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 5.f scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: pgmap v6: 197 pgs: 1 active+clean+scrubbing, 195 active+clean, 1 active+clean+scrubbing+deep; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 4.3 scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 4.3 scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 5.f scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 5.14 scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 3.1 scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 4.e deep-scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 4.e deep-scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 4.2 scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 4.2 scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 4.c scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 4.c scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: pgmap v7: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 12 op/s
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 4.6 scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 4.6 scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 3.1 scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 3.4 scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 5.1b scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 3.4 scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 6.1 scrub starts
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 6.1 scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: 5.1b scrub ok
Dec 05 09:47:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:58 compute-0 ceph-mon[74418]: mgrmap e23: compute-0.hvnxai(active, since 11s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:47:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c82234b381a98e02358e58279f4e80318c3e7f38bc97ef54aa53a0da5b3a14/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c82234b381a98e02358e58279f4e80318c3e7f38bc97ef54aa53a0da5b3a14/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c82234b381a98e02358e58279f4e80318c3e7f38bc97ef54aa53a0da5b3a14/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:58 compute-0 podman[92483]: 2025-12-05 09:47:58.423315015 +0000 UTC m=+0.224027307 container init 21c765035944408dca6690a8165c26dfdebe9aacde7f01e0e00a672a04ccf5c9 (image=quay.io/ceph/ceph:v19, name=hungry_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 05 09:47:58 compute-0 podman[92483]: 2025-12-05 09:47:58.430059249 +0000 UTC m=+0.230771511 container start 21c765035944408dca6690a8165c26dfdebe9aacde7f01e0e00a672a04ccf5c9 (image=quay.io/ceph/ceph:v19, name=hungry_hodgkin, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:47:58 compute-0 podman[92483]: 2025-12-05 09:47:58.472412022 +0000 UTC m=+0.273124294 container attach 21c765035944408dca6690a8165c26dfdebe9aacde7f01e0e00a672a04ccf5c9 (image=quay.io/ceph/ceph:v19, name=hungry_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:47:58 compute-0 sudo[92457]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:58 compute-0 sudo[92552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:47:58 compute-0 sudo[92552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:58 compute-0 sudo[92552]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:58 compute-0 sudo[92577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 05 09:47:58 compute-0 sudo[92577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:58 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 05 09:47:58 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 05 09:47:58 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:58 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 05 09:47:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 05 09:47:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 05 09:47:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:58 compute-0 hungry_hodgkin[92512]: Scheduled mds.cephfs update...
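[Editor's note] The entries above record "ceph orch apply" consuming an MDS service spec: cephadm saves the mds.cephfs spec, the mon persists it under the config-key mgr/cephadm/spec.mds.cephfs, and the containerized client acknowledges with "Scheduled mds.cephfs update...". The spec file itself is not captured in the log; only the resulting placement is. A minimal sketch of what was likely applied, reusing the mount paths visible in the later podman invocation (/tmp/ceph_mds.yml mapped to /home/ceph_spec.yaml):

    # Hypothetical spec file -- contents inferred from the logged placement only.
    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    EOF

    # Applied via the same containerized client pattern used throughout this log:
    podman run --rm --net=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        orch apply -i /home/ceph_spec.yaml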
Dec 05 09:47:58 compute-0 systemd[1]: libpod-21c765035944408dca6690a8165c26dfdebe9aacde7f01e0e00a672a04ccf5c9.scope: Deactivated successfully.
Dec 05 09:47:58 compute-0 podman[92483]: 2025-12-05 09:47:58.880657231 +0000 UTC m=+0.681369503 container died 21c765035944408dca6690a8165c26dfdebe9aacde7f01e0e00a672a04ccf5c9 (image=quay.io/ceph/ceph:v19, name=hungry_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 09:47:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:47:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-58c82234b381a98e02358e58279f4e80318c3e7f38bc97ef54aa53a0da5b3a14-merged.mount: Deactivated successfully.
Dec 05 09:47:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 05 09:47:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:58 compute-0 podman[92483]: 2025-12-05 09:47:58.992249396 +0000 UTC m=+0.792961658 container remove 21c765035944408dca6690a8165c26dfdebe9aacde7f01e0e00a672a04ccf5c9 (image=quay.io/ceph/ceph:v19, name=hungry_hodgkin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:47:58 compute-0 systemd[1]: libpod-conmon-21c765035944408dca6690a8165c26dfdebe9aacde7f01e0e00a672a04ccf5c9.scope: Deactivated successfully.
Dec 05 09:47:59 compute-0 sudo[92577]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 sudo[92454]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:47:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:47:59 compute-0 sudo[92656]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uouplyfneifpnmngraveqmenuvxpfspq ; /usr/bin/python3'
Dec 05 09:47:59 compute-0 sudo[92656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 05 09:47:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:47:59 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:47:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:47:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:47:59 compute-0 sudo[92659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:47:59 compute-0 sudo[92659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:59 compute-0 sudo[92659]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 sudo[92684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph
Dec 05 09:47:59 compute-0 sudo[92684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:59 compute-0 sudo[92684]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 python3[92658]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
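[Editor's note] This Ansible task wraps "ceph nfs cluster create" in a throwaway container. Stripped of the podman plumbing, the operative command is (a sketch; flags copied verbatim from the logged invocation):

    ceph nfs cluster create cephfs \
        --ingress \
        --virtual-ip=192.168.122.2/24 \
        --ingress-mode=haproxy-protocol \
        --placement="compute-0 compute-1 compute-2"

The mgr's reaction is visible in the entries that follow: it creates the .nfs pool, enables the nfs application on it, and saves nfs.cephfs and ingress.nfs.cephfs service specs.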
Dec 05 09:47:59 compute-0 ceph-mon[74418]: 5.5 scrub starts
Dec 05 09:47:59 compute-0 ceph-mon[74418]: 5.5 scrub ok
Dec 05 09:47:59 compute-0 ceph-mon[74418]: 5.1c scrub starts
Dec 05 09:47:59 compute-0 ceph-mon[74418]: 5.1c scrub ok
Dec 05 09:47:59 compute-0 ceph-mon[74418]: pgmap v8: 197 pgs: 2 active+clean+scrubbing, 195 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 8 op/s
Dec 05 09:47:59 compute-0 ceph-mon[74418]: 6.1c scrub starts
Dec 05 09:47:59 compute-0 ceph-mon[74418]: 6.1c scrub ok
Dec 05 09:47:59 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:59 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:59 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:59 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:59 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:59 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:47:59 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 09:47:59 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:47:59 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:47:59 compute-0 sudo[92709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:47:59 compute-0 sudo[92709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:59 compute-0 sudo[92709]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 podman[92720]: 2025-12-05 09:47:59.404076413 +0000 UTC m=+0.042450496 container create 54aea11083071c8c3ecb79d4e6fc1b498278de2b2d3301030a4a9e533c2b3702 (image=quay.io/ceph/ceph:v19, name=hopeful_beaver, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 09:47:59 compute-0 systemd[1]: Started libpod-conmon-54aea11083071c8c3ecb79d4e6fc1b498278de2b2d3301030a4a9e533c2b3702.scope.
Dec 05 09:47:59 compute-0 sudo[92746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:47:59 compute-0 sudo[92746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:59 compute-0 sudo[92746]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 podman[92720]: 2025-12-05 09:47:59.386013562 +0000 UTC m=+0.024387655 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:47:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e5124af5cf3dce4990a524d3a82619889e172131e264d9c76f58eccd3c5d13/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e5124af5cf3dce4990a524d3a82619889e172131e264d9c76f58eccd3c5d13/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e5124af5cf3dce4990a524d3a82619889e172131e264d9c76f58eccd3c5d13/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:47:59 compute-0 podman[92720]: 2025-12-05 09:47:59.503438647 +0000 UTC m=+0.141812750 container init 54aea11083071c8c3ecb79d4e6fc1b498278de2b2d3301030a4a9e533c2b3702 (image=quay.io/ceph/ceph:v19, name=hopeful_beaver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:59 compute-0 podman[92720]: 2025-12-05 09:47:59.511452745 +0000 UTC m=+0.149826828 container start 54aea11083071c8c3ecb79d4e6fc1b498278de2b2d3301030a4a9e533c2b3702 (image=quay.io/ceph/ceph:v19, name=hopeful_beaver, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:47:59 compute-0 podman[92720]: 2025-12-05 09:47:59.516365398 +0000 UTC m=+0.154739511 container attach 54aea11083071c8c3ecb79d4e6fc1b498278de2b2d3301030a4a9e533c2b3702 (image=quay.io/ceph/ceph:v19, name=hopeful_beaver, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 09:47:59 compute-0 sudo[92776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:47:59 compute-0 sudo[92776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:59 compute-0 sudo[92776]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 sudo[92825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:47:59 compute-0 sudo[92825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:59 compute-0 sudo[92825]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 sudo[92869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:47:59 compute-0 sudo[92869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:59 compute-0 sudo[92869]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 sudo[92894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:47:59 compute-0 sudo[92894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:59 compute-0 sudo[92894]: pam_unix(sudo:session): session closed for user root
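[Editor's note] The sudo bursts above show how cephadm distributes /etc/ceph/ceph.conf: every file is staged under /tmp/cephadm-<fsid>, handed to the ceph-admin SSH user for writing, returned to root ownership, given its final mode, and only then moved onto the live path. Condensed into one sequence (each step appears in the log as its own sudo session):

    fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5
    stage=/tmp/cephadm-$fsid/etc/ceph

    sudo mkdir -p /etc/ceph "$stage"
    sudo touch "$stage/ceph.conf.new"
    sudo chown -R ceph-admin /tmp/cephadm-$fsid    # let the SSH user write the payload
    # ... the mgr writes the rendered ceph.conf into the staged file here ...
    sudo chown -R 0:0 "$stage/ceph.conf.new"
    sudo chmod 644 "$stage/ceph.conf.new"          # keyrings below get 600 instead
    sudo mv "$stage/ceph.conf.new" /etc/ceph/ceph.conf   # swap into place as the last step

The same dance repeats below for /var/lib/ceph/<fsid>/config/ceph.conf and for the client.admin keyrings, which are installed with mode 600 rather than 644.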
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:59 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 05 09:47:59 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:59 compute-0 sudo[92919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:47:59 compute-0 sudo[92919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:59 compute-0 sudo[92919]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 sudo[92944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:47:59 compute-0 sudo[92944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:59 compute-0 sudo[92944]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14484 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:47:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Dec 05 09:47:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec 05 09:47:59 compute-0 sudo[92969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:47:59 compute-0 sudo[92969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:59 compute-0 sudo[92969]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 sudo[92997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:47:59 compute-0 sudo[92997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:47:59 compute-0 sudo[92997]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:47:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:48:00 compute-0 sudo[93022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:48:00 compute-0 sudo[93022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93022]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v9: 197 pgs: 2 active+clean+scrubbing, 195 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 6 op/s
Dec 05 09:48:00 compute-0 sudo[93070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:48:00 compute-0 sudo[93070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93070]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 sudo[93095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:48:00 compute-0 sudo[93095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93095]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 sudo[93120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:48:00 compute-0 sudo[93120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93120]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:48:00 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:48:00 compute-0 sudo[93145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:48:00 compute-0 sudo[93145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93145]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:48:00 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:48:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 05 09:48:00 compute-0 sudo[93170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph
Dec 05 09:48:00 compute-0 sudo[93170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93170]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 05 09:48:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 05 09:48:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 05 09:48:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Dec 05 09:48:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
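[Editor's note] Before the NFS service can be scheduled, the nfs mgr module provisions its backing RADOS pool, as the two dispatched mon commands above show. Their CLI equivalents would be roughly (pool names beginning with a dot require the extra confirmation flag):

    ceph osd pool create .nfs --yes-i-really-mean-it
    ceph osd pool application enable .nfs nfs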
Dec 05 09:48:00 compute-0 ceph-mon[74418]: 5.1d scrub starts
Dec 05 09:48:00 compute-0 ceph-mon[74418]: 5.1d scrub ok
Dec 05 09:48:00 compute-0 ceph-mon[74418]: from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:48:00 compute-0 ceph-mon[74418]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 05 09:48:00 compute-0 ceph-mon[74418]: 4.d scrub starts
Dec 05 09:48:00 compute-0 ceph-mon[74418]: 4.d scrub ok
Dec 05 09:48:00 compute-0 ceph-mon[74418]: Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:48:00 compute-0 ceph-mon[74418]: Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:48:00 compute-0 ceph-mon[74418]: Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:48:00 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec 05 09:48:00 compute-0 sudo[93195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:48:00 compute-0 sudo[93195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93195]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 sudo[93220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:48:00 compute-0 sudo[93220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93220]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 sudo[93245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:48:00 compute-0 sudo[93245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93245]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:48:00 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:48:00 compute-0 sudo[93293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:48:00 compute-0 sudo[93293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93293]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 sudo[93318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:48:00 compute-0 sudo[93318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93318]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 sudo[93343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 05 09:48:00 compute-0 sudo[93343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93343]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:48:00 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:48:00 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 05 09:48:00 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 05 09:48:00 compute-0 sudo[93368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:48:00 compute-0 sudo[93368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93368]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 sudo[93393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:48:00 compute-0 sudo[93393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93393]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 sudo[93418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:48:00 compute-0 sudo[93418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:00 compute-0 sudo[93418]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:48:00 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:48:01 compute-0 sudo[93443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:48:01 compute-0 sudo[93443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:01 compute-0 sudo[93443]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:01 compute-0 sudo[93468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:48:01 compute-0 sudo[93468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:01 compute-0 sudo[93468]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:01 compute-0 sudo[93516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:48:01 compute-0 sudo[93516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:01 compute-0 sudo[93516]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:01 compute-0 sudo[93541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:48:01 compute-0 sudo[93541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:01 compute-0 sudo[93541]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:01 compute-0 sudo[93566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:48:01 compute-0 sudo[93566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:01 compute-0 sudo[93566]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:48:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:48:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:01 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:48:01 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:48:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 05 09:48:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 05 09:48:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 05 09:48:01 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 05 09:48:01 compute-0 ceph-mon[74418]: Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:48:01 compute-0 ceph-mon[74418]: 3.2 scrub starts
Dec 05 09:48:01 compute-0 ceph-mon[74418]: 3.2 scrub ok
Dec 05 09:48:01 compute-0 ceph-mon[74418]: Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:48:01 compute-0 ceph-mon[74418]: from='client.14484 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:48:01 compute-0 ceph-mon[74418]: Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:48:01 compute-0 ceph-mon[74418]: 4.1a scrub starts
Dec 05 09:48:01 compute-0 ceph-mon[74418]: 4.1a scrub ok
Dec 05 09:48:01 compute-0 ceph-mon[74418]: pgmap v9: 197 pgs: 2 active+clean+scrubbing, 195 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 6 op/s
Dec 05 09:48:01 compute-0 ceph-mon[74418]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:48:01 compute-0 ceph-mon[74418]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:48:01 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 05 09:48:01 compute-0 ceph-mon[74418]: osdmap e50: 3 total, 3 up, 3 in
Dec 05 09:48:01 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec 05 09:48:01 compute-0 ceph-mon[74418]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:48:01 compute-0 ceph-mon[74418]: Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:48:01 compute-0 ceph-mon[74418]: Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:48:01 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:01 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:01 compute-0 ceph-mon[74418]: Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:48:01 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 05 09:48:01 compute-0 ceph-mon[74418]: osdmap e51: 3 total, 3 up, 3 in
Dec 05 09:48:01 compute-0 ceph-mgr[74711]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Dec 05 09:48:01 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 05 09:48:01 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 05 09:48:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:48:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:01 compute-0 ceph-mgr[74711]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 05 09:48:01 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 05 09:48:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 05 09:48:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
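[Editor's note] Both generated specs are persisted through the mon's config-key store, per the two config-key set commands above. They can be read back later, e.g. (a sketch, assuming an admin keyring on the host):

    # Export every saved service spec, including the two just written:
    ceph orch ls --export
    # Or fetch the raw stored objects by the key names from the audit log:
    ceph config-key get mgr/cephadm/spec.nfs.cephfs
    ceph config-key get mgr/cephadm/spec.ingress.nfs.cephfs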
Dec 05 09:48:01 compute-0 systemd[1]: libpod-54aea11083071c8c3ecb79d4e6fc1b498278de2b2d3301030a4a9e533c2b3702.scope: Deactivated successfully.
Dec 05 09:48:01 compute-0 podman[92720]: 2025-12-05 09:48:01.470645756 +0000 UTC m=+2.109019849 container died 54aea11083071c8c3ecb79d4e6fc1b498278de2b2d3301030a4a9e533c2b3702 (image=quay.io/ceph/ceph:v19, name=hopeful_beaver, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 09:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5e5124af5cf3dce4990a524d3a82619889e172131e264d9c76f58eccd3c5d13-merged.mount: Deactivated successfully.
Dec 05 09:48:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:48:01 compute-0 podman[92720]: 2025-12-05 09:48:01.515002943 +0000 UTC m=+2.153377016 container remove 54aea11083071c8c3ecb79d4e6fc1b498278de2b2d3301030a4a9e533c2b3702 (image=quay.io/ceph/ceph:v19, name=hopeful_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:48:01 compute-0 systemd[1]: libpod-conmon-54aea11083071c8c3ecb79d4e6fc1b498278de2b2d3301030a4a9e533c2b3702.scope: Deactivated successfully.
Dec 05 09:48:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:48:01 compute-0 sudo[92656]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:01 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.1f deep-scrub starts
Dec 05 09:48:01 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 3.1f deep-scrub ok
Dec 05 09:48:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:48:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:48:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:48:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:48:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:02 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev e6b80fbb-68a6-45b0-9e7a-98cc1d0c2876 (Updating node-exporter deployment (+2 -> 3))
Dec 05 09:48:02 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Dec 05 09:48:02 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Dec 05 09:48:02 compute-0 sudo[93691]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpfenexelcifglaeyumgwhderabtzapb ; /usr/bin/python3'
Dec 05 09:48:02 compute-0 sudo[93691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v12: 198 pgs: 1 creating+peering, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 7 op/s
Dec 05 09:48:02 compute-0 python3[93693]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 09:48:02 compute-0 sudo[93691]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 05 09:48:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 05 09:48:02 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 05 09:48:02 compute-0 ceph-mon[74418]: 3.1e scrub starts
Dec 05 09:48:02 compute-0 ceph-mon[74418]: 3.1e scrub ok
Dec 05 09:48:02 compute-0 ceph-mon[74418]: 3.1c scrub starts
Dec 05 09:48:02 compute-0 ceph-mon[74418]: 3.1c scrub ok
Dec 05 09:48:02 compute-0 ceph-mon[74418]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 05 09:48:02 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:02 compute-0 ceph-mon[74418]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 05 09:48:02 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:02 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:02 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:02 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:02 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:02 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:02 compute-0 ceph-mon[74418]: Deploying daemon node-exporter.compute-1 on compute-1
Dec 05 09:48:02 compute-0 ceph-mon[74418]: pgmap v12: 198 pgs: 1 creating+peering, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 7 op/s
Dec 05 09:48:02 compute-0 ceph-mon[74418]: osdmap e52: 3 total, 3 up, 3 in
Dec 05 09:48:02 compute-0 sudo[93764]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ushemduhrvoqjsagjenghgwkbviwiexh ; /usr/bin/python3'
Dec 05 09:48:02 compute-0 sudo[93764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:02 compute-0 python3[93766]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764928081.9660308-37418-108049272609763/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=e07228101fefffc0e2e19f022990975c2f351480 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:48:02 compute-0 sudo[93764]: pam_unix(sudo:session): session closed for user root
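[Editor's note] The stat/copy pair above is Ansible installing the OpenStack client keyring; _original_basename=ceph_key.j2 suggests the content was rendered from a template, and the body itself is withheld (content=NOT_LOGGING_PARAMETER). A plain-shell approximation of the resulting file operation (Ansible additionally compares checksums and writes through a temp file):

    # Sketch only -- the keyring body is redacted in the log.
    install -o 167 -g 167 -m 0644 \
        /home/zuul/.ansible/tmp/ansible-tmp-1764928081.9660308-37418-108049272609763/source \
        /etc/ceph/ceph.client.openstack.keyring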
Dec 05 09:48:02 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 05 09:48:02 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 05 09:48:03 compute-0 sudo[93814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajxzofjeiezawtzyvtdzcocyyrwirloe ; /usr/bin/python3'
Dec 05 09:48:03 compute-0 sudo[93814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:03 compute-0 python3[93816]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:03 compute-0 podman[93817]: 2025-12-05 09:48:03.264761146 +0000 UTC m=+0.022349429 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:48:03 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 05 09:48:03 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 05 09:48:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v14: 198 pgs: 1 creating+peering, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:48:04 compute-0 podman[93817]: 2025-12-05 09:48:04.755637684 +0000 UTC m=+1.513225997 container create fd154fa38ed76a65e1ec4696af0d3410e2b6696b7e8207df86f6118261534e2c (image=quay.io/ceph/ceph:v19, name=great_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:48:04 compute-0 ceph-mon[74418]: 3.1f deep-scrub starts
Dec 05 09:48:04 compute-0 ceph-mon[74418]: 3.1f deep-scrub ok
Dec 05 09:48:04 compute-0 ceph-mon[74418]: 4.1b scrub starts
Dec 05 09:48:04 compute-0 ceph-mon[74418]: 4.1b scrub ok
Dec 05 09:48:04 compute-0 systemd[1]: Started libpod-conmon-fd154fa38ed76a65e1ec4696af0d3410e2b6696b7e8207df86f6118261534e2c.scope.
Dec 05 09:48:04 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.hvnxai(active, since 18s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:48:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eecab908cddd8d3aaef0efc9f04bbd5c393181df861345b9727273f3a5477d74/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eecab908cddd8d3aaef0efc9f04bbd5c393181df861345b9727273f3a5477d74/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:04 compute-0 podman[93817]: 2025-12-05 09:48:04.831131158 +0000 UTC m=+1.588719451 container init fd154fa38ed76a65e1ec4696af0d3410e2b6696b7e8207df86f6118261534e2c (image=quay.io/ceph/ceph:v19, name=great_beaver, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 09:48:04 compute-0 podman[93817]: 2025-12-05 09:48:04.837218404 +0000 UTC m=+1.594806667 container start fd154fa38ed76a65e1ec4696af0d3410e2b6696b7e8207df86f6118261534e2c (image=quay.io/ceph/ceph:v19, name=great_beaver, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 09:48:04 compute-0 podman[93817]: 2025-12-05 09:48:04.844352297 +0000 UTC m=+1.601940560 container attach fd154fa38ed76a65e1ec4696af0d3410e2b6696b7e8207df86f6118261534e2c (image=quay.io/ceph/ceph:v19, name=great_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 09:48:04 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 05 09:48:04 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 05 09:48:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Dec 05 09:48:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2297261251' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec 05 09:48:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2297261251' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 05 09:48:05 compute-0 systemd[1]: libpod-fd154fa38ed76a65e1ec4696af0d3410e2b6696b7e8207df86f6118261534e2c.scope: Deactivated successfully.
Dec 05 09:48:05 compute-0 podman[93817]: 2025-12-05 09:48:05.410562284 +0000 UTC m=+2.168150557 container died fd154fa38ed76a65e1ec4696af0d3410e2b6696b7e8207df86f6118261534e2c (image=quay.io/ceph/ceph:v19, name=great_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 09:48:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-eecab908cddd8d3aaef0efc9f04bbd5c393181df861345b9727273f3a5477d74-merged.mount: Deactivated successfully.
Dec 05 09:48:05 compute-0 podman[93817]: 2025-12-05 09:48:05.459042534 +0000 UTC m=+2.216630797 container remove fd154fa38ed76a65e1ec4696af0d3410e2b6696b7e8207df86f6118261534e2c (image=quay.io/ceph/ceph:v19, name=great_beaver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:05 compute-0 systemd[1]: libpod-conmon-fd154fa38ed76a65e1ec4696af0d3410e2b6696b7e8207df86f6118261534e2c.scope: Deactivated successfully.
Dec 05 09:48:05 compute-0 sudo[93814]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:48:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:48:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 05 09:48:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:05 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Dec 05 09:48:05 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Dec 05 09:48:05 compute-0 ceph-mon[74418]: 5.19 scrub starts
Dec 05 09:48:05 compute-0 ceph-mon[74418]: 5.19 scrub ok
Dec 05 09:48:05 compute-0 ceph-mon[74418]: 4.18 scrub starts
Dec 05 09:48:05 compute-0 ceph-mon[74418]: 4.18 scrub ok
Dec 05 09:48:05 compute-0 ceph-mon[74418]: 7.1b scrub starts
Dec 05 09:48:05 compute-0 ceph-mon[74418]: 7.1b scrub ok
Dec 05 09:48:05 compute-0 ceph-mon[74418]: 5.18 deep-scrub starts
Dec 05 09:48:05 compute-0 ceph-mon[74418]: 5.18 deep-scrub ok
Dec 05 09:48:05 compute-0 ceph-mon[74418]: pgmap v14: 198 pgs: 1 creating+peering, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:48:05 compute-0 ceph-mon[74418]: mgrmap e24: compute-0.hvnxai(active, since 18s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:48:05 compute-0 ceph-mon[74418]: 7.1e scrub starts
Dec 05 09:48:05 compute-0 ceph-mon[74418]: 7.1e scrub ok
Dec 05 09:48:05 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2297261251' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec 05 09:48:05 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2297261251' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 05 09:48:05 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:05 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:05 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:05 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Dec 05 09:48:05 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Dec 05 09:48:06 compute-0 sudo[93892]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugdmutqihxurbhqvbuvlxgdtbzdprowq ; /usr/bin/python3'
Dec 05 09:48:06 compute-0 sudo[93892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v15: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:06 compute-0 python3[93894]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:06 compute-0 podman[93896]: 2025-12-05 09:48:06.321643966 +0000 UTC m=+0.051605236 container create 53e90916339e2e3bada8faa730b9aa28a586d3ba5b8326f7130c69f006067879 (image=quay.io/ceph/ceph:v19, name=cranky_wilbur, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:48:06 compute-0 systemd[1]: Started libpod-conmon-53e90916339e2e3bada8faa730b9aa28a586d3ba5b8326f7130c69f006067879.scope.
Dec 05 09:48:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8052bcd316971565c4740b932d5bd57b95f33b300555db0659e714cd4e952e21/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8052bcd316971565c4740b932d5bd57b95f33b300555db0659e714cd4e952e21/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:06 compute-0 podman[93896]: 2025-12-05 09:48:06.301741205 +0000 UTC m=+0.031702495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:48:06 compute-0 podman[93896]: 2025-12-05 09:48:06.401661733 +0000 UTC m=+0.131623043 container init 53e90916339e2e3bada8faa730b9aa28a586d3ba5b8326f7130c69f006067879 (image=quay.io/ceph/ceph:v19, name=cranky_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:48:06 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:48:06 compute-0 podman[93896]: 2025-12-05 09:48:06.408181581 +0000 UTC m=+0.138142871 container start 53e90916339e2e3bada8faa730b9aa28a586d3ba5b8326f7130c69f006067879 (image=quay.io/ceph/ceph:v19, name=cranky_wilbur, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 05 09:48:06 compute-0 podman[93896]: 2025-12-05 09:48:06.412268672 +0000 UTC m=+0.142229992 container attach 53e90916339e2e3bada8faa730b9aa28a586d3ba5b8326f7130c69f006067879 (image=quay.io/ceph/ceph:v19, name=cranky_wilbur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Dec 05 09:48:06 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 05 09:48:06 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 05 09:48:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:48:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 05 09:48:07 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2379555609' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 09:48:07 compute-0 cranky_wilbur[93913]: 
Dec 05 09:48:07 compute-0 cranky_wilbur[93913]: {"fsid":"3c63ce0f-5206-59ae-8381-b67d0b6424b5","health":{"status":"HEALTH_ERR","checks":{"BLUESTORE_SLOW_OP_ALERT":{"severity":"HEALTH_WARN","summary":{"message":"1 OSD(s) experiencing slow operations in BlueStore","count":1},"muted":false},"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":93,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":52,"num_osds":3,"num_up_osds":3,"osd_up_since":1764928023,"num_in_osds":3,"osd_in_since":1764928000,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":197},{"state_name":"creating+peering","count":1}],"num_pgs":198,"num_pools":12,"num_objects":194,"data_bytes":464595,"bytes_used":88969216,"bytes_avail":64322957312,"bytes_total":64411926528,"inactive_pgs_ratio":0.0050505050458014011},"fsmap":{"epoch":2,"btime":"2025-12-05T09:47:51:448980+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2025-12-05T09:47:24.904069+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.hvnxai":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.unhddt":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.wewrgp":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","24134":{"start_epoch":5,"start_stamp":"2025-12-05T09:47:24.354013+0000","gid":24134,"addr":"192.168.122.101:0/3300078974","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast 
endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.oiufcm","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864308","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"9f18a6e0-a6ec-473c-a9cd-a3aa558c03b5","zone_name":"default","zonegroup_id":"b382a99c-fac1-4429-b1c0-99673026582b","zonegroup_name":"default"},"task_status":{}},"24142":{"start_epoch":5,"start_stamp":"2025-12-05T09:47:24.283577+0000","gid":24142,"addr":"192.168.122.102:0/3149331825","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.gzawrf","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864320","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"9f18a6e0-a6ec-473c-a9cd-a3aa558c03b5","zone_name":"default","zonegroup_id":"b382a99c-fac1-4429-b1c0-99673026582b","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"e6b80fbb-68a6-45b0-9e7a-98cc1d0c2876":{"message":"Updating node-exporter deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 05 09:48:07 compute-0 systemd[1]: libpod-53e90916339e2e3bada8faa730b9aa28a586d3ba5b8326f7130c69f006067879.scope: Deactivated successfully.
Dec 05 09:48:07 compute-0 podman[93939]: 2025-12-05 09:48:07.69696142 +0000 UTC m=+0.037640046 container died 53e90916339e2e3bada8faa730b9aa28a586d3ba5b8326f7130c69f006067879 (image=quay.io/ceph/ceph:v19, name=cranky_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:48:07 compute-0 ceph-mon[74418]: 6.15 scrub starts
Dec 05 09:48:07 compute-0 ceph-mon[74418]: 6.15 scrub ok
Dec 05 09:48:07 compute-0 ceph-mon[74418]: Deploying daemon node-exporter.compute-2 on compute-2
Dec 05 09:48:07 compute-0 ceph-mon[74418]: 7.18 deep-scrub starts
Dec 05 09:48:07 compute-0 ceph-mon[74418]: 7.18 deep-scrub ok
Dec 05 09:48:07 compute-0 ceph-mon[74418]: pgmap v15: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8052bcd316971565c4740b932d5bd57b95f33b300555db0659e714cd4e952e21-merged.mount: Deactivated successfully.
Dec 05 09:48:07 compute-0 podman[93939]: 2025-12-05 09:48:07.767404507 +0000 UTC m=+0.108083103 container remove 53e90916339e2e3bada8faa730b9aa28a586d3ba5b8326f7130c69f006067879 (image=quay.io/ceph/ceph:v19, name=cranky_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 09:48:07 compute-0 systemd[1]: libpod-conmon-53e90916339e2e3bada8faa730b9aa28a586d3ba5b8326f7130c69f006067879.scope: Deactivated successfully.
Dec 05 09:48:07 compute-0 sudo[93892]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:07 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 05 09:48:07 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 05 09:48:07 compute-0 sudo[93978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okrmrsyfmpwcdqoxmhwnqlavmgutfvkt ; /usr/bin/python3'
Dec 05 09:48:07 compute-0 sudo[93978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:08 compute-0 python3[93980]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v16: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:08 compute-0 podman[93981]: 2025-12-05 09:48:08.152478234 +0000 UTC m=+0.047172564 container create b255ce4627a67872c10a5b4a6a9cc8bdaec10678ec58ffb7a66ed71b9e865bc2 (image=quay.io/ceph/ceph:v19, name=condescending_swanson, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:48:08 compute-0 systemd[1]: Started libpod-conmon-b255ce4627a67872c10a5b4a6a9cc8bdaec10678ec58ffb7a66ed71b9e865bc2.scope.
Dec 05 09:48:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4127cdd3051bded2c26aa5e1ad09f59a3bfb91ff31b368517bd6110b0a6912ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4127cdd3051bded2c26aa5e1ad09f59a3bfb91ff31b368517bd6110b0a6912ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:08 compute-0 podman[93981]: 2025-12-05 09:48:08.131634908 +0000 UTC m=+0.026329258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:48:08 compute-0 podman[93981]: 2025-12-05 09:48:08.237117427 +0000 UTC m=+0.131811787 container init b255ce4627a67872c10a5b4a6a9cc8bdaec10678ec58ffb7a66ed71b9e865bc2 (image=quay.io/ceph/ceph:v19, name=condescending_swanson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 09:48:08 compute-0 podman[93981]: 2025-12-05 09:48:08.243291266 +0000 UTC m=+0.137985596 container start b255ce4627a67872c10a5b4a6a9cc8bdaec10678ec58ffb7a66ed71b9e865bc2 (image=quay.io/ceph/ceph:v19, name=condescending_swanson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 05 09:48:08 compute-0 podman[93981]: 2025-12-05 09:48:08.247094219 +0000 UTC m=+0.141788569 container attach b255ce4627a67872c10a5b4a6a9cc8bdaec10678ec58ffb7a66ed71b9e865bc2 (image=quay.io/ceph/ceph:v19, name=condescending_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:48:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:48:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:48:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 05 09:48:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:08 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev e6b80fbb-68a6-45b0-9e7a-98cc1d0c2876 (Updating node-exporter deployment (+2 -> 3))
Dec 05 09:48:08 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event e6b80fbb-68a6-45b0-9e7a-98cc1d0c2876 (Updating node-exporter deployment (+2 -> 3)) in 6 seconds
Dec 05 09:48:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 05 09:48:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:48:08 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:48:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:48:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:48:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:48:08 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:08 compute-0 sudo[94019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:48:08 compute-0 sudo[94019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:08 compute-0 sudo[94019]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:08 compute-0 sudo[94044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:48:08 compute-0 sudo[94044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 09:48:08 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2557698580' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 09:48:08 compute-0 condescending_swanson[93996]: 
Dec 05 09:48:08 compute-0 condescending_swanson[93996]: {"epoch":3,"fsid":"3c63ce0f-5206-59ae-8381-b67d0b6424b5","modified":"2025-12-05T09:46:29.159401Z","created":"2025-12-05T09:43:16.088283Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Dec 05 09:48:08 compute-0 condescending_swanson[93996]: dumped monmap epoch 3
Dec 05 09:48:08 compute-0 systemd[1]: libpod-b255ce4627a67872c10a5b4a6a9cc8bdaec10678ec58ffb7a66ed71b9e865bc2.scope: Deactivated successfully.
Dec 05 09:48:08 compute-0 podman[93981]: 2025-12-05 09:48:08.722475976 +0000 UTC m=+0.617170306 container died b255ce4627a67872c10a5b4a6a9cc8bdaec10678ec58ffb7a66ed71b9e865bc2 (image=quay.io/ceph/ceph:v19, name=condescending_swanson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4127cdd3051bded2c26aa5e1ad09f59a3bfb91ff31b368517bd6110b0a6912ce-merged.mount: Deactivated successfully.
Dec 05 09:48:08 compute-0 ceph-mon[74418]: 6.2 scrub starts
Dec 05 09:48:08 compute-0 ceph-mon[74418]: 6.2 scrub ok
Dec 05 09:48:08 compute-0 ceph-mon[74418]: 7.6 scrub starts
Dec 05 09:48:08 compute-0 ceph-mon[74418]: 7.6 scrub ok
Dec 05 09:48:08 compute-0 ceph-mon[74418]: 6.d scrub starts
Dec 05 09:48:08 compute-0 ceph-mon[74418]: 6.d scrub ok
Dec 05 09:48:08 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2379555609' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 09:48:08 compute-0 ceph-mon[74418]: 7.2 scrub starts
Dec 05 09:48:08 compute-0 ceph-mon[74418]: 7.2 scrub ok
Dec 05 09:48:08 compute-0 ceph-mon[74418]: pgmap v16: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:08 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:08 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:08 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:08 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:08 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:48:08 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:48:08 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:08 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2557698580' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 09:48:08 compute-0 podman[93981]: 2025-12-05 09:48:08.77922965 +0000 UTC m=+0.673923990 container remove b255ce4627a67872c10a5b4a6a9cc8bdaec10678ec58ffb7a66ed71b9e865bc2 (image=quay.io/ceph/ceph:v19, name=condescending_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 05 09:48:08 compute-0 systemd[1]: libpod-conmon-b255ce4627a67872c10a5b4a6a9cc8bdaec10678ec58ffb7a66ed71b9e865bc2.scope: Deactivated successfully.
Dec 05 09:48:08 compute-0 sudo[93978]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:08 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 05 09:48:08 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 05 09:48:08 compute-0 podman[94123]: 2025-12-05 09:48:08.979341405 +0000 UTC m=+0.043243437 container create fa075bfbcf91c8c5ce1706e34ebfe82abba1a5a8d17d41c366358ec68194f534 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lederberg, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:48:09 compute-0 systemd[1]: Started libpod-conmon-fa075bfbcf91c8c5ce1706e34ebfe82abba1a5a8d17d41c366358ec68194f534.scope.
Dec 05 09:48:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:09 compute-0 podman[94123]: 2025-12-05 09:48:09.056121805 +0000 UTC m=+0.120023857 container init fa075bfbcf91c8c5ce1706e34ebfe82abba1a5a8d17d41c366358ec68194f534 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 09:48:09 compute-0 podman[94123]: 2025-12-05 09:48:08.963242798 +0000 UTC m=+0.027144850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:48:09 compute-0 podman[94123]: 2025-12-05 09:48:09.062134568 +0000 UTC m=+0.126036600 container start fa075bfbcf91c8c5ce1706e34ebfe82abba1a5a8d17d41c366358ec68194f534 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lederberg, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:48:09 compute-0 podman[94123]: 2025-12-05 09:48:09.06552605 +0000 UTC m=+0.129428142 container attach fa075bfbcf91c8c5ce1706e34ebfe82abba1a5a8d17d41c366358ec68194f534 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lederberg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 05 09:48:09 compute-0 focused_lederberg[94140]: 167 167
Dec 05 09:48:09 compute-0 systemd[1]: libpod-fa075bfbcf91c8c5ce1706e34ebfe82abba1a5a8d17d41c366358ec68194f534.scope: Deactivated successfully.
Dec 05 09:48:09 compute-0 podman[94123]: 2025-12-05 09:48:09.068131711 +0000 UTC m=+0.132033753 container died fa075bfbcf91c8c5ce1706e34ebfe82abba1a5a8d17d41c366358ec68194f534 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lederberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:48:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-12eb3f47539a6c35250c4ea5277f662cdc7e83817487ded31c71131df51dc17e-merged.mount: Deactivated successfully.
Dec 05 09:48:09 compute-0 podman[94123]: 2025-12-05 09:48:09.113080334 +0000 UTC m=+0.176982366 container remove fa075bfbcf91c8c5ce1706e34ebfe82abba1a5a8d17d41c366358ec68194f534 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:48:09 compute-0 systemd[1]: libpod-conmon-fa075bfbcf91c8c5ce1706e34ebfe82abba1a5a8d17d41c366358ec68194f534.scope: Deactivated successfully.
Dec 05 09:48:09 compute-0 sudo[94193]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfrtvgiwcuzxgalvjlmsxyqnrduvmhbu ; /usr/bin/python3'
Dec 05 09:48:09 compute-0 sudo[94193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:09 compute-0 podman[94181]: 2025-12-05 09:48:09.314819153 +0000 UTC m=+0.058663287 container create 3af745a20e0123d973bc6a9f8ba0294223a245675d2c705eb3ccbbf8380243b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 05 09:48:09 compute-0 systemd[1]: Started libpod-conmon-3af745a20e0123d973bc6a9f8ba0294223a245675d2c705eb3ccbbf8380243b8.scope.
Dec 05 09:48:09 compute-0 podman[94181]: 2025-12-05 09:48:09.290315416 +0000 UTC m=+0.034159590 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:48:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360ac9e1efdc13602b1bd41cc98d25f04b478e16231b3e6ff3f2787bab067020/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360ac9e1efdc13602b1bd41cc98d25f04b478e16231b3e6ff3f2787bab067020/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360ac9e1efdc13602b1bd41cc98d25f04b478e16231b3e6ff3f2787bab067020/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360ac9e1efdc13602b1bd41cc98d25f04b478e16231b3e6ff3f2787bab067020/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360ac9e1efdc13602b1bd41cc98d25f04b478e16231b3e6ff3f2787bab067020/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:09 compute-0 podman[94181]: 2025-12-05 09:48:09.426559754 +0000 UTC m=+0.170403938 container init 3af745a20e0123d973bc6a9f8ba0294223a245675d2c705eb3ccbbf8380243b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:48:09 compute-0 python3[94200]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:09 compute-0 podman[94181]: 2025-12-05 09:48:09.434830169 +0000 UTC m=+0.178674293 container start 3af745a20e0123d973bc6a9f8ba0294223a245675d2c705eb3ccbbf8380243b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:48:09 compute-0 podman[94181]: 2025-12-05 09:48:09.43852105 +0000 UTC m=+0.182365204 container attach 3af745a20e0123d973bc6a9f8ba0294223a245675d2c705eb3ccbbf8380243b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ptolemy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:09 compute-0 podman[94211]: 2025-12-05 09:48:09.500188348 +0000 UTC m=+0.050145786 container create 6ce0d1e3d1ad8085410bd40e687bfaa783e74f632668061f54767b6fec7f3a16 (image=quay.io/ceph/ceph:v19, name=sweet_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 09:48:09 compute-0 systemd[1]: Started libpod-conmon-6ce0d1e3d1ad8085410bd40e687bfaa783e74f632668061f54767b6fec7f3a16.scope.
Dec 05 09:48:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f9b214fb6c4bd71d6da108861a5502774e784ac38fa31568c86ae968de5dd3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f9b214fb6c4bd71d6da108861a5502774e784ac38fa31568c86ae968de5dd3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:09 compute-0 podman[94211]: 2025-12-05 09:48:09.469495862 +0000 UTC m=+0.019453320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:48:09 compute-0 podman[94211]: 2025-12-05 09:48:09.577104661 +0000 UTC m=+0.127062119 container init 6ce0d1e3d1ad8085410bd40e687bfaa783e74f632668061f54767b6fec7f3a16 (image=quay.io/ceph/ceph:v19, name=sweet_antonelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 09:48:09 compute-0 podman[94211]: 2025-12-05 09:48:09.582618151 +0000 UTC m=+0.132575589 container start 6ce0d1e3d1ad8085410bd40e687bfaa783e74f632668061f54767b6fec7f3a16 (image=quay.io/ceph/ceph:v19, name=sweet_antonelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 09:48:09 compute-0 podman[94211]: 2025-12-05 09:48:09.586967339 +0000 UTC m=+0.136924787 container attach 6ce0d1e3d1ad8085410bd40e687bfaa783e74f632668061f54767b6fec7f3a16 (image=quay.io/ceph/ceph:v19, name=sweet_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 09:48:09 compute-0 wonderful_ptolemy[94206]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:48:09 compute-0 wonderful_ptolemy[94206]: --> All data devices are unavailable
Dec 05 09:48:09 compute-0 systemd[1]: libpod-3af745a20e0123d973bc6a9f8ba0294223a245675d2c705eb3ccbbf8380243b8.scope: Deactivated successfully.
Dec 05 09:48:09 compute-0 conmon[94206]: conmon 3af745a20e0123d973bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3af745a20e0123d973bc6a9f8ba0294223a245675d2c705eb3ccbbf8380243b8.scope/container/memory.events
Dec 05 09:48:09 compute-0 podman[94181]: 2025-12-05 09:48:09.822713704 +0000 UTC m=+0.566557828 container died 3af745a20e0123d973bc6a9f8ba0294223a245675d2c705eb3ccbbf8380243b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 09:48:09 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Dec 05 09:48:09 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Dec 05 09:48:09 compute-0 ceph-mon[74418]: 6.3 deep-scrub starts
Dec 05 09:48:09 compute-0 ceph-mon[74418]: 6.3 deep-scrub ok
Dec 05 09:48:09 compute-0 ceph-mon[74418]: 7.3 scrub starts
Dec 05 09:48:09 compute-0 ceph-mon[74418]: 7.3 scrub ok
Dec 05 09:48:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-360ac9e1efdc13602b1bd41cc98d25f04b478e16231b3e6ff3f2787bab067020-merged.mount: Deactivated successfully.
Dec 05 09:48:09 compute-0 podman[94181]: 2025-12-05 09:48:09.952170077 +0000 UTC m=+0.696014211 container remove 3af745a20e0123d973bc6a9f8ba0294223a245675d2c705eb3ccbbf8380243b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ptolemy, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 09:48:09 compute-0 systemd[1]: libpod-conmon-3af745a20e0123d973bc6a9f8ba0294223a245675d2c705eb3ccbbf8380243b8.scope: Deactivated successfully.
Dec 05 09:48:10 compute-0 sudo[94044]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec 05 09:48:10 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3270365430' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 05 09:48:10 compute-0 sweet_antonelli[94226]: [client.openstack]
Dec 05 09:48:10 compute-0 sweet_antonelli[94226]:         key = AQAKqTJpAAAAABAAnLxgItl+ZCeyHPuze9T3Cw==
Dec 05 09:48:10 compute-0 sweet_antonelli[94226]:         caps mgr = "allow *"
Dec 05 09:48:10 compute-0 sweet_antonelli[94226]:         caps mon = "profile rbd"
Dec 05 09:48:10 compute-0 sweet_antonelli[94226]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
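[editor's note] The sweet_antonelli container above is just printing the keyring the mon returned for the `auth get client.openstack` command dispatched two lines earlier. The keyring is plain INI-style text, so tooling can read the key and caps out of it without any Ceph libraries. The sketch below parses a snippet copied (and shortened) from the log output; the helper name and the shortened values are illustrative, not anything cephadm ships.

    #!/usr/bin/env python3
    # Minimal sketch: parse a Ceph keyring like the client.openstack block
    # printed by the container above. The layout is taken from the log;
    # the key and the osd caps string are shortened placeholders.
    import configparser
    import io

    KEYRING_TEXT = """\
    [client.openstack]
            key = AQAKqTJp-shortened
            caps mgr = "allow *"
            caps mon = "profile rbd"
            caps osd = "profile rbd pool=vms, profile rbd pool=volumes"
    """

    def parse_keyring(text):
        cp = configparser.ConfigParser()
        cp.read_file(io.StringIO(text))
        entities = {}
        for entity in cp.sections():
            opts = dict(cp.items(entity))
            entities[entity] = {
                "key": opts.get("key"),
                # caps values are quoted in the keyring; strip the quotes
                "caps": {name.split(" ", 1)[1]: value.strip('"')
                         for name, value in opts.items()
                         if name.startswith("caps ")},
            }
        return entities

    if __name__ == "__main__":
        print(parse_keyring(KEYRING_TEXT))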
Dec 05 09:48:10 compute-0 sudo[94274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:48:10 compute-0 sudo[94274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:10 compute-0 systemd[1]: libpod-6ce0d1e3d1ad8085410bd40e687bfaa783e74f632668061f54767b6fec7f3a16.scope: Deactivated successfully.
Dec 05 09:48:10 compute-0 podman[94211]: 2025-12-05 09:48:10.088553658 +0000 UTC m=+0.638511096 container died 6ce0d1e3d1ad8085410bd40e687bfaa783e74f632668061f54767b6fec7f3a16 (image=quay.io/ceph/ceph:v19, name=sweet_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Dec 05 09:48:10 compute-0 sudo[94274]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-03f9b214fb6c4bd71d6da108861a5502774e784ac38fa31568c86ae968de5dd3-merged.mount: Deactivated successfully.
Dec 05 09:48:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v17: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:10 compute-0 podman[94211]: 2025-12-05 09:48:10.140206744 +0000 UTC m=+0.690164182 container remove 6ce0d1e3d1ad8085410bd40e687bfaa783e74f632668061f54767b6fec7f3a16 (image=quay.io/ceph/ceph:v19, name=sweet_antonelli, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 05 09:48:10 compute-0 sudo[94308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:48:10 compute-0 systemd[1]: libpod-conmon-6ce0d1e3d1ad8085410bd40e687bfaa783e74f632668061f54767b6fec7f3a16.scope: Deactivated successfully.
Dec 05 09:48:10 compute-0 sudo[94308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:10 compute-0 sudo[94193]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:10 compute-0 podman[94378]: 2025-12-05 09:48:10.559119673 +0000 UTC m=+0.054385211 container create cc4ae4e33964a4d6e5ae516edd124f7bba8f7c4647f5e98fa1a6ba6e495dd7b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_agnesi, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:10 compute-0 systemd[1]: Started libpod-conmon-cc4ae4e33964a4d6e5ae516edd124f7bba8f7c4647f5e98fa1a6ba6e495dd7b6.scope.
Dec 05 09:48:10 compute-0 podman[94378]: 2025-12-05 09:48:10.530630578 +0000 UTC m=+0.025896216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:48:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:10 compute-0 podman[94378]: 2025-12-05 09:48:10.643854539 +0000 UTC m=+0.139120167 container init cc4ae4e33964a4d6e5ae516edd124f7bba8f7c4647f5e98fa1a6ba6e495dd7b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_agnesi, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:48:10 compute-0 podman[94378]: 2025-12-05 09:48:10.651426645 +0000 UTC m=+0.146692193 container start cc4ae4e33964a4d6e5ae516edd124f7bba8f7c4647f5e98fa1a6ba6e495dd7b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_agnesi, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:48:10 compute-0 flamboyant_agnesi[94395]: 167 167
Dec 05 09:48:10 compute-0 systemd[1]: libpod-cc4ae4e33964a4d6e5ae516edd124f7bba8f7c4647f5e98fa1a6ba6e495dd7b6.scope: Deactivated successfully.
Dec 05 09:48:10 compute-0 podman[94378]: 2025-12-05 09:48:10.655438074 +0000 UTC m=+0.150703652 container attach cc4ae4e33964a4d6e5ae516edd124f7bba8f7c4647f5e98fa1a6ba6e495dd7b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 09:48:10 compute-0 podman[94378]: 2025-12-05 09:48:10.655825055 +0000 UTC m=+0.151090613 container died cc4ae4e33964a4d6e5ae516edd124f7bba8f7c4647f5e98fa1a6ba6e495dd7b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_agnesi, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 09:48:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cc99d4d6387689e9ad3703a4e4223268dc9e0737e50e5f10b9784703551fc10-merged.mount: Deactivated successfully.
Dec 05 09:48:10 compute-0 podman[94378]: 2025-12-05 09:48:10.696914813 +0000 UTC m=+0.192180371 container remove cc4ae4e33964a4d6e5ae516edd124f7bba8f7c4647f5e98fa1a6ba6e495dd7b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_agnesi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:10 compute-0 systemd[1]: libpod-conmon-cc4ae4e33964a4d6e5ae516edd124f7bba8f7c4647f5e98fa1a6ba6e495dd7b6.scope: Deactivated successfully.
Dec 05 09:48:10 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 05 09:48:10 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 05 09:48:10 compute-0 podman[94419]: 2025-12-05 09:48:10.885152405 +0000 UTC m=+0.049535569 container create ecc8ddbdf1f3509f2316131d2c7a1a88524bfcac491ac72ee9596a76e4ea346b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kirch, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:48:10 compute-0 systemd[1]: Started libpod-conmon-ecc8ddbdf1f3509f2316131d2c7a1a88524bfcac491ac72ee9596a76e4ea346b.scope.
Dec 05 09:48:10 compute-0 ceph-mon[74418]: 6.e scrub starts
Dec 05 09:48:10 compute-0 ceph-mon[74418]: 6.e scrub ok
Dec 05 09:48:10 compute-0 ceph-mon[74418]: 7.4 deep-scrub starts
Dec 05 09:48:10 compute-0 ceph-mon[74418]: 7.4 deep-scrub ok
Dec 05 09:48:10 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3270365430' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 05 09:48:10 compute-0 ceph-mon[74418]: pgmap v17: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:10 compute-0 podman[94419]: 2025-12-05 09:48:10.868180483 +0000 UTC m=+0.032563677 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382a4ed7218e8f79285d9440fd99296c079e50ab0da8012072a215ddcd16eb49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382a4ed7218e8f79285d9440fd99296c079e50ab0da8012072a215ddcd16eb49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382a4ed7218e8f79285d9440fd99296c079e50ab0da8012072a215ddcd16eb49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382a4ed7218e8f79285d9440fd99296c079e50ab0da8012072a215ddcd16eb49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:10 compute-0 podman[94419]: 2025-12-05 09:48:10.998564961 +0000 UTC m=+0.162948165 container init ecc8ddbdf1f3509f2316131d2c7a1a88524bfcac491ac72ee9596a76e4ea346b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 09:48:11 compute-0 podman[94419]: 2025-12-05 09:48:11.006400554 +0000 UTC m=+0.170783758 container start ecc8ddbdf1f3509f2316131d2c7a1a88524bfcac491ac72ee9596a76e4ea346b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kirch, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:48:11 compute-0 podman[94419]: 2025-12-05 09:48:11.010679001 +0000 UTC m=+0.175062185 container attach ecc8ddbdf1f3509f2316131d2c7a1a88524bfcac491ac72ee9596a76e4ea346b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kirch, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:48:11 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 13 completed events
Dec 05 09:48:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:48:11 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]: {
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:     "1": [
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:         {
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             "devices": [
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "/dev/loop3"
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             ],
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             "lv_name": "ceph_lv0",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             "lv_size": "21470642176",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             "name": "ceph_lv0",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             "tags": {
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.cluster_name": "ceph",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.crush_device_class": "",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.encrypted": "0",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.osd_id": "1",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.type": "block",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.vdo": "0",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:                 "ceph.with_tpm": "0"
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             },
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             "type": "block",
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:             "vg_name": "ceph_vg0"
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:         }
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]:     ]
Dec 05 09:48:11 compute-0 nostalgic_kirch[94435]: }
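[editor's note] The nostalgic_kirch container holds the output of the `ceph-volume ... lvm list --format json` call that cephadm issued through sudo (PID 94308) a second earlier. cephadm consumes this JSON to map OSD ids to their backing logical volumes; the sketch below shows one plausible way to read it, assuming only the fields visible in the output above (the helper functions are illustrative, not cephadm's own code).

    #!/usr/bin/env python3
    # Rough sketch: run the same `ceph-volume lvm list --format json` query
    # as the container above and map each OSD id to its block LV and OSD
    # fsid. Field names are taken from the log output; nothing else is
    # assumed about the report structure.
    import json
    import subprocess

    def lvm_list():
        # Equivalent of the command cephadm runs above; assumes ceph-volume
        # is available on the host or inside a `cephadm shell`.
        out = subprocess.run(
            ["ceph-volume", "lvm", "list", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    def osd_devices(report):
        devices = {}
        for osd_id, lvs in report.items():
            for lv in lvs:
                if lv.get("type") != "block":
                    continue
                devices[osd_id] = {
                    "lv_path": lv["lv_path"],
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                    "physical_devices": lv.get("devices", []),
                }
        return devices

    if __name__ == "__main__":
        print(osd_devices(lvm_list()))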
Dec 05 09:48:11 compute-0 systemd[1]: libpod-ecc8ddbdf1f3509f2316131d2c7a1a88524bfcac491ac72ee9596a76e4ea346b.scope: Deactivated successfully.
Dec 05 09:48:11 compute-0 podman[94419]: 2025-12-05 09:48:11.307273521 +0000 UTC m=+0.471656695 container died ecc8ddbdf1f3509f2316131d2c7a1a88524bfcac491ac72ee9596a76e4ea346b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:48:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-382a4ed7218e8f79285d9440fd99296c079e50ab0da8012072a215ddcd16eb49-merged.mount: Deactivated successfully.
Dec 05 09:48:11 compute-0 podman[94419]: 2025-12-05 09:48:11.351554596 +0000 UTC m=+0.515937760 container remove ecc8ddbdf1f3509f2316131d2c7a1a88524bfcac491ac72ee9596a76e4ea346b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:48:11 compute-0 systemd[1]: libpod-conmon-ecc8ddbdf1f3509f2316131d2c7a1a88524bfcac491ac72ee9596a76e4ea346b.scope: Deactivated successfully.
Dec 05 09:48:11 compute-0 sudo[94308]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:11 compute-0 sudo[94531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:48:11 compute-0 sudo[94531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:11 compute-0 sudo[94531]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:11 compute-0 sudo[94564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:48:11 compute-0 sudo[94564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:11 compute-0 sudo[94653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmaznbxeljcrlfilyhjpugsybahrorei ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764928091.2730002-37490-8422676443957/async_wrapper.py j18537626654 30 /home/zuul/.ansible/tmp/ansible-tmp-1764928091.2730002-37490-8422676443957/AnsiballZ_command.py _'
Dec 05 09:48:11 compute-0 sudo[94653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:11 compute-0 ansible-async_wrapper.py[94655]: Invoked with j18537626654 30 /home/zuul/.ansible/tmp/ansible-tmp-1764928091.2730002-37490-8422676443957/AnsiballZ_command.py _
Dec 05 09:48:11 compute-0 ansible-async_wrapper.py[94682]: Starting module and watcher
Dec 05 09:48:11 compute-0 ansible-async_wrapper.py[94682]: Start watching 94683 (30)
Dec 05 09:48:11 compute-0 ansible-async_wrapper.py[94683]: Start module (94683)
Dec 05 09:48:11 compute-0 ansible-async_wrapper.py[94655]: Return async_wrapper task started.
Dec 05 09:48:11 compute-0 sudo[94653]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:11 compute-0 python3[94686]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:11 compute-0 podman[94699]: 2025-12-05 09:48:11.861869572 +0000 UTC m=+0.022164434 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:48:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v18: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:48:12 compute-0 sudo[94770]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iroxnldiycdolksixnlrxmkkzuznfefz ; /usr/bin/python3'
Dec 05 09:48:12 compute-0 sudo[94770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:13 compute-0 podman[94699]: 2025-12-05 09:48:13.018478045 +0000 UTC m=+1.178772907 container create 389de70ab819a85382eb13ef27a011eb38b9f1b50d7827ea063ba369894ba172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 05 09:48:13 compute-0 python3[94772]: ansible-ansible.legacy.async_status Invoked with jid=j18537626654.94655 mode=status _async_dir=/root/.ansible_async
Dec 05 09:48:13 compute-0 sudo[94770]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:13 compute-0 ceph-mon[74418]: 6.19 scrub starts
Dec 05 09:48:13 compute-0 ceph-mon[74418]: 6.19 scrub ok
Dec 05 09:48:13 compute-0 ceph-mon[74418]: 7.f scrub starts
Dec 05 09:48:13 compute-0 ceph-mon[74418]: 7.f scrub ok
Dec 05 09:48:13 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:13 compute-0 systemd[1]: Started libpod-conmon-389de70ab819a85382eb13ef27a011eb38b9f1b50d7827ea063ba369894ba172.scope.
Dec 05 09:48:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:13 compute-0 podman[94699]: 2025-12-05 09:48:13.223125653 +0000 UTC m=+1.383420495 container init 389de70ab819a85382eb13ef27a011eb38b9f1b50d7827ea063ba369894ba172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_shamir, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 05 09:48:13 compute-0 podman[94699]: 2025-12-05 09:48:13.232669913 +0000 UTC m=+1.392964755 container start 389de70ab819a85382eb13ef27a011eb38b9f1b50d7827ea063ba369894ba172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_shamir, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:13 compute-0 peaceful_shamir[94775]: 167 167
Dec 05 09:48:13 compute-0 systemd[1]: libpod-389de70ab819a85382eb13ef27a011eb38b9f1b50d7827ea063ba369894ba172.scope: Deactivated successfully.
Dec 05 09:48:13 compute-0 podman[94713]: 2025-12-05 09:48:13.238454371 +0000 UTC m=+1.295666148 container create d6fa4391ecd2ff3c8de0429bd6d9182751bcf62d1e2edbc21c51ead352ebb328 (image=quay.io/ceph/ceph:v19, name=affectionate_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 09:48:13 compute-0 podman[94699]: 2025-12-05 09:48:13.244808193 +0000 UTC m=+1.405103035 container attach 389de70ab819a85382eb13ef27a011eb38b9f1b50d7827ea063ba369894ba172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_shamir, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:48:13 compute-0 podman[94699]: 2025-12-05 09:48:13.245577774 +0000 UTC m=+1.405872616 container died 389de70ab819a85382eb13ef27a011eb38b9f1b50d7827ea063ba369894ba172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:48:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-82d03c0a7131b37f193db89575cde2bd55f80291f4eaa64f5a5d6ff67700cf90-merged.mount: Deactivated successfully.
Dec 05 09:48:13 compute-0 podman[94713]: 2025-12-05 09:48:13.19726609 +0000 UTC m=+1.254477887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:48:13 compute-0 systemd[1]: Started libpod-conmon-d6fa4391ecd2ff3c8de0429bd6d9182751bcf62d1e2edbc21c51ead352ebb328.scope.
Dec 05 09:48:13 compute-0 podman[94699]: 2025-12-05 09:48:13.300316024 +0000 UTC m=+1.460610866 container remove 389de70ab819a85382eb13ef27a011eb38b9f1b50d7827ea063ba369894ba172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_shamir, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:13 compute-0 systemd[1]: libpod-conmon-389de70ab819a85382eb13ef27a011eb38b9f1b50d7827ea063ba369894ba172.scope: Deactivated successfully.
Dec 05 09:48:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2d590d35b5be39bed7b059c079887cf181fb94257fa8c8410f45f4259638a49/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2d590d35b5be39bed7b059c079887cf181fb94257fa8c8410f45f4259638a49/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:13 compute-0 podman[94713]: 2025-12-05 09:48:13.327907015 +0000 UTC m=+1.385118822 container init d6fa4391ecd2ff3c8de0429bd6d9182751bcf62d1e2edbc21c51ead352ebb328 (image=quay.io/ceph/ceph:v19, name=affectionate_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Dec 05 09:48:13 compute-0 podman[94713]: 2025-12-05 09:48:13.334448673 +0000 UTC m=+1.391660450 container start d6fa4391ecd2ff3c8de0429bd6d9182751bcf62d1e2edbc21c51ead352ebb328 (image=quay.io/ceph/ceph:v19, name=affectionate_mendel, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 05 09:48:13 compute-0 podman[94713]: 2025-12-05 09:48:13.338289997 +0000 UTC m=+1.395501794 container attach d6fa4391ecd2ff3c8de0429bd6d9182751bcf62d1e2edbc21c51ead352ebb328 (image=quay.io/ceph/ceph:v19, name=affectionate_mendel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 09:48:13 compute-0 podman[94808]: 2025-12-05 09:48:13.472458158 +0000 UTC m=+0.043812763 container create 716e5e134a7bf71dd72172289752c535eedeb6dc2bdff951455abd45cb82b3a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mestorf, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:13 compute-0 systemd[1]: Started libpod-conmon-716e5e134a7bf71dd72172289752c535eedeb6dc2bdff951455abd45cb82b3a4.scope.
Dec 05 09:48:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86f06f087a1b06f3ff8d9f9f0582e755e0f7a1b5739162638304daec871bae8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86f06f087a1b06f3ff8d9f9f0582e755e0f7a1b5739162638304daec871bae8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86f06f087a1b06f3ff8d9f9f0582e755e0f7a1b5739162638304daec871bae8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86f06f087a1b06f3ff8d9f9f0582e755e0f7a1b5739162638304daec871bae8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:13 compute-0 podman[94808]: 2025-12-05 09:48:13.452622538 +0000 UTC m=+0.023977163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:48:13 compute-0 podman[94808]: 2025-12-05 09:48:13.566059955 +0000 UTC m=+0.137414590 container init 716e5e134a7bf71dd72172289752c535eedeb6dc2bdff951455abd45cb82b3a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mestorf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 09:48:13 compute-0 podman[94808]: 2025-12-05 09:48:13.572968103 +0000 UTC m=+0.144322708 container start 716e5e134a7bf71dd72172289752c535eedeb6dc2bdff951455abd45cb82b3a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mestorf, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 05 09:48:13 compute-0 podman[94808]: 2025-12-05 09:48:13.576305133 +0000 UTC m=+0.147659768 container attach 716e5e134a7bf71dd72172289752c535eedeb6dc2bdff951455abd45cb82b3a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mestorf, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 05 09:48:13 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14520 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:48:13 compute-0 affectionate_mendel[94795]: 
Dec 05 09:48:13 compute-0 affectionate_mendel[94795]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
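[editor's note] The affectionate_mendel container is the `ceph orch status --format json` probe launched by the Ansible command task at 09:48:11, and the single JSON line above is all the caller needs to decide whether the cephadm orchestrator is usable. Below is a minimal sketch of such a check; it reuses the image and the /etc/ceph mount from the podman command in the log, but omits the --fsid/-c/-k arguments on the assumption that the mounted /etc/ceph provides them.

    #!/usr/bin/env python3
    # Minimal sketch: run the same `ceph orch status --format json` probe
    # as the Ansible task above and exit non-zero when the cephadm
    # orchestrator is unavailable or paused.
    import json
    import subprocess

    ORCH_STATUS_CMD = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "orch", "status", "--format", "json",
    ]

    def orch_ready():
        out = subprocess.run(ORCH_STATUS_CMD, check=True,
                             capture_output=True, text=True).stdout
        # Expected shape, per the log line above:
        # {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
        status = json.loads(out)
        return (status.get("backend") == "cephadm"
                and status.get("available")
                and not status.get("paused"))

    if __name__ == "__main__":
        raise SystemExit(0 if orch_ready() else 1)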
Dec 05 09:48:13 compute-0 systemd[1]: libpod-d6fa4391ecd2ff3c8de0429bd6d9182751bcf62d1e2edbc21c51ead352ebb328.scope: Deactivated successfully.
Dec 05 09:48:13 compute-0 podman[94713]: 2025-12-05 09:48:13.753783983 +0000 UTC m=+1.810995780 container died d6fa4391ecd2ff3c8de0429bd6d9182751bcf62d1e2edbc21c51ead352ebb328 (image=quay.io/ceph/ceph:v19, name=affectionate_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 09:48:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2d590d35b5be39bed7b059c079887cf181fb94257fa8c8410f45f4259638a49-merged.mount: Deactivated successfully.
Dec 05 09:48:13 compute-0 podman[94713]: 2025-12-05 09:48:13.802304523 +0000 UTC m=+1.859516310 container remove d6fa4391ecd2ff3c8de0429bd6d9182751bcf62d1e2edbc21c51ead352ebb328 (image=quay.io/ceph/ceph:v19, name=affectionate_mendel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:48:13 compute-0 systemd[1]: libpod-conmon-d6fa4391ecd2ff3c8de0429bd6d9182751bcf62d1e2edbc21c51ead352ebb328.scope: Deactivated successfully.
Dec 05 09:48:13 compute-0 ansible-async_wrapper.py[94683]: Module complete (94683)
Dec 05 09:48:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v19: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:14 compute-0 ceph-mon[74418]: 6.1a scrub starts
Dec 05 09:48:14 compute-0 ceph-mon[74418]: 6.1a scrub ok
Dec 05 09:48:14 compute-0 ceph-mon[74418]: pgmap v18: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:14 compute-0 lvm[94953]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:48:14 compute-0 lvm[94953]: VG ceph_vg0 finished
Dec 05 09:48:14 compute-0 sudo[94978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfozcmwpqlxszujiquuvvylqjgsecynr ; /usr/bin/python3'
Dec 05 09:48:14 compute-0 sudo[94978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:14 compute-0 pensive_mestorf[94843]: {}
Dec 05 09:48:14 compute-0 systemd[1]: libpod-716e5e134a7bf71dd72172289752c535eedeb6dc2bdff951455abd45cb82b3a4.scope: Deactivated successfully.
Dec 05 09:48:14 compute-0 systemd[1]: libpod-716e5e134a7bf71dd72172289752c535eedeb6dc2bdff951455abd45cb82b3a4.scope: Consumed 1.114s CPU time.
Dec 05 09:48:14 compute-0 podman[94808]: 2025-12-05 09:48:14.279132158 +0000 UTC m=+0.850486763 container died 716e5e134a7bf71dd72172289752c535eedeb6dc2bdff951455abd45cb82b3a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mestorf, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:48:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f86f06f087a1b06f3ff8d9f9f0582e755e0f7a1b5739162638304daec871bae8-merged.mount: Deactivated successfully.
Dec 05 09:48:14 compute-0 podman[94808]: 2025-12-05 09:48:14.325825139 +0000 UTC m=+0.897179734 container remove 716e5e134a7bf71dd72172289752c535eedeb6dc2bdff951455abd45cb82b3a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mestorf, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:48:14 compute-0 systemd[1]: libpod-conmon-716e5e134a7bf71dd72172289752c535eedeb6dc2bdff951455abd45cb82b3a4.scope: Deactivated successfully.
Dec 05 09:48:14 compute-0 sudo[94564]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:14 compute-0 python3[94981]: ansible-ansible.legacy.async_status Invoked with jid=j18537626654.94655 mode=status _async_dir=/root/.ansible_async
Dec 05 09:48:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:48:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:14 compute-0 sudo[94978]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:48:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:14 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 25721ddf-6752-4d87-bf7e-e8caf627a06b (Updating rgw.rgw deployment (+1 -> 3))
Dec 05 09:48:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pppcpu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 05 09:48:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pppcpu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:48:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pppcpu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
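[editor's note] Before the mgr deploys the new RGW daemon it obtains a keyring for it with the `auth get-or-create` mon command recorded above. A sketch of the CLI equivalent follows; the entity name is the auto-generated one from this log and the caps are copied verbatim, but treating the CLI as interchangeable with the mgr's internal mon_command dispatch is an assumption of this note.

    #!/usr/bin/env python3
    # Sketch of the CLI equivalent of the `auth get-or-create` mon command
    # the mgr dispatches above while deploying rgw.rgw.compute-0.pppcpu.
    # The entity name is the generated one from this log; yours will differ.
    import subprocess

    entity = "client.rgw.rgw.compute-0.pppcpu"
    cmd = [
        "ceph", "auth", "get-or-create", entity,
        "mon", "allow *",
        "mgr", "allow rw",
        "osd", "allow rwx tag rgw *=*",
    ]
    keyring = subprocess.run(cmd, check=True,
                             capture_output=True, text=True).stdout
    # Same keyring format as the client.openstack block earlier in the log.
    print(keyring)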
Dec 05 09:48:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 05 09:48:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:48:14 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:14 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.pppcpu on compute-0
Dec 05 09:48:14 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.pppcpu on compute-0
Dec 05 09:48:14 compute-0 sudo[94999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:48:14 compute-0 sudo[94999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:14 compute-0 sudo[94999]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:14 compute-0 sudo[95046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:48:14 compute-0 sudo[95089]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooauiztdjoctxkmqybldyfaphnzlkjpf ; /usr/bin/python3'
Dec 05 09:48:14 compute-0 sudo[95046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:14 compute-0 sudo[95089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:14 compute-0 python3[95093]: ansible-ansible.legacy.async_status Invoked with jid=j18537626654.94655 mode=cleanup _async_dir=/root/.ansible_async
Dec 05 09:48:14 compute-0 sudo[95089]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:14 compute-0 podman[95135]: 2025-12-05 09:48:14.949702365 +0000 UTC m=+0.058903463 container create 7007f54f50b40b8bc034b16b041dced6beea2e0e53bc4ea2cf9bdf6f627587c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kilby, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 09:48:14 compute-0 systemd[1]: Started libpod-conmon-7007f54f50b40b8bc034b16b041dced6beea2e0e53bc4ea2cf9bdf6f627587c4.scope.
Dec 05 09:48:15 compute-0 podman[95135]: 2025-12-05 09:48:14.925096686 +0000 UTC m=+0.034297774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:48:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:15 compute-0 podman[95135]: 2025-12-05 09:48:15.044585317 +0000 UTC m=+0.153786395 container init 7007f54f50b40b8bc034b16b041dced6beea2e0e53bc4ea2cf9bdf6f627587c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kilby, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:48:15 compute-0 podman[95135]: 2025-12-05 09:48:15.05240934 +0000 UTC m=+0.161610398 container start 7007f54f50b40b8bc034b16b041dced6beea2e0e53bc4ea2cf9bdf6f627587c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:15 compute-0 podman[95135]: 2025-12-05 09:48:15.056188183 +0000 UTC m=+0.165389251 container attach 7007f54f50b40b8bc034b16b041dced6beea2e0e53bc4ea2cf9bdf6f627587c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 09:48:15 compute-0 determined_kilby[95151]: 167 167
Dec 05 09:48:15 compute-0 systemd[1]: libpod-7007f54f50b40b8bc034b16b041dced6beea2e0e53bc4ea2cf9bdf6f627587c4.scope: Deactivated successfully.
Dec 05 09:48:15 compute-0 podman[95135]: 2025-12-05 09:48:15.058901787 +0000 UTC m=+0.168102845 container died 7007f54f50b40b8bc034b16b041dced6beea2e0e53bc4ea2cf9bdf6f627587c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kilby, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:48:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab35d63fef0aa52265264b67eb940f540b076d1cf994055b295bb78f2ae8fa1a-merged.mount: Deactivated successfully.
Dec 05 09:48:15 compute-0 podman[95135]: 2025-12-05 09:48:15.134853473 +0000 UTC m=+0.244054531 container remove 7007f54f50b40b8bc034b16b041dced6beea2e0e53bc4ea2cf9bdf6f627587c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kilby, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:48:15 compute-0 systemd[1]: libpod-conmon-7007f54f50b40b8bc034b16b041dced6beea2e0e53bc4ea2cf9bdf6f627587c4.scope: Deactivated successfully.
Dec 05 09:48:15 compute-0 systemd[1]: Reloading.
Dec 05 09:48:15 compute-0 systemd-rc-local-generator[95220]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:48:15 compute-0 systemd-sysv-generator[95223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:48:15 compute-0 sudo[95194]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tshtxkgdcfwjikvskubekolixqveoaof ; /usr/bin/python3'
Dec 05 09:48:15 compute-0 sudo[95194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:15 compute-0 systemd[1]: Reloading.
Dec 05 09:48:15 compute-0 ceph-mon[74418]: from='client.14520 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:48:15 compute-0 ceph-mon[74418]: pgmap v19: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:15 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:15 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:15 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pppcpu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:48:15 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pppcpu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 09:48:15 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:15 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:15 compute-0 systemd-sysv-generator[95262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:48:15 compute-0 systemd-rc-local-generator[95258]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:48:15 compute-0 python3[95231]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:15 compute-0 podman[95269]: 2025-12-05 09:48:15.642793925 +0000 UTC m=+0.053855117 container create a88df3962ec042b2ffd9c8233eb43ef8a99fa943b8b9a2423d508ff796c40e98 (image=quay.io/ceph/ceph:v19, name=nervous_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 05 09:48:15 compute-0 podman[95269]: 2025-12-05 09:48:15.626018118 +0000 UTC m=+0.037079340 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:48:15 compute-0 systemd[1]: Started libpod-conmon-a88df3962ec042b2ffd9c8233eb43ef8a99fa943b8b9a2423d508ff796c40e98.scope.
Dec 05 09:48:15 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.pppcpu for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:48:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c179255151677b734e7af0ce60a84ebe5b55079f1595eebbb72b3603634944ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c179255151677b734e7af0ce60a84ebe5b55079f1595eebbb72b3603634944ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:15 compute-0 podman[95269]: 2025-12-05 09:48:15.777586772 +0000 UTC m=+0.188647984 container init a88df3962ec042b2ffd9c8233eb43ef8a99fa943b8b9a2423d508ff796c40e98 (image=quay.io/ceph/ceph:v19, name=nervous_golick, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:48:15 compute-0 podman[95269]: 2025-12-05 09:48:15.785381855 +0000 UTC m=+0.196443047 container start a88df3962ec042b2ffd9c8233eb43ef8a99fa943b8b9a2423d508ff796c40e98 (image=quay.io/ceph/ceph:v19, name=nervous_golick, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:15 compute-0 podman[95269]: 2025-12-05 09:48:15.788818518 +0000 UTC m=+0.199879710 container attach a88df3962ec042b2ffd9c8233eb43ef8a99fa943b8b9a2423d508ff796c40e98 (image=quay.io/ceph/ceph:v19, name=nervous_golick, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 09:48:15 compute-0 podman[95354]: 2025-12-05 09:48:15.9899021 +0000 UTC m=+0.065584475 container create cd9f5d9419fe80e6ffe97db2da1e385c939028b502c3c9b74048de70166b1e37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-rgw-rgw-compute-0-pppcpu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 05 09:48:16 compute-0 podman[95354]: 2025-12-05 09:48:15.95829625 +0000 UTC m=+0.033978645 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9519f68652da2d11ebffb00f144b9f1a66b142b798487c299cadde3ed320e81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9519f68652da2d11ebffb00f144b9f1a66b142b798487c299cadde3ed320e81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9519f68652da2d11ebffb00f144b9f1a66b142b798487c299cadde3ed320e81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9519f68652da2d11ebffb00f144b9f1a66b142b798487c299cadde3ed320e81/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.pppcpu supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:16 compute-0 podman[95354]: 2025-12-05 09:48:16.070440962 +0000 UTC m=+0.146123367 container init cd9f5d9419fe80e6ffe97db2da1e385c939028b502c3c9b74048de70166b1e37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-rgw-rgw-compute-0-pppcpu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:48:16 compute-0 podman[95354]: 2025-12-05 09:48:16.075603412 +0000 UTC m=+0.151285807 container start cd9f5d9419fe80e6ffe97db2da1e385c939028b502c3c9b74048de70166b1e37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-rgw-rgw-compute-0-pppcpu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 09:48:16 compute-0 bash[95354]: cd9f5d9419fe80e6ffe97db2da1e385c939028b502c3c9b74048de70166b1e37
Dec 05 09:48:16 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.pppcpu for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:48:16 compute-0 radosgw[95374]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 05 09:48:16 compute-0 radosgw[95374]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec 05 09:48:16 compute-0 radosgw[95374]: framework: beast
Dec 05 09:48:16 compute-0 radosgw[95374]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 05 09:48:16 compute-0 radosgw[95374]: init_numa not setting numa affinity
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v20: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:16 compute-0 sudo[95046]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14526 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:48:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:48:16 compute-0 nervous_golick[95286]: 
Dec 05 09:48:16 compute-0 nervous_golick[95286]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 05 09:48:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 05 09:48:16 compute-0 systemd[1]: libpod-a88df3962ec042b2ffd9c8233eb43ef8a99fa943b8b9a2423d508ff796c40e98.scope: Deactivated successfully.
Dec 05 09:48:16 compute-0 podman[95269]: 2025-12-05 09:48:16.168202392 +0000 UTC m=+0.579263594 container died a88df3962ec042b2ffd9c8233eb43ef8a99fa943b8b9a2423d508ff796c40e98 (image=quay.io/ceph/ceph:v19, name=nervous_golick, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:48:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 25721ddf-6752-4d87-bf7e-e8caf627a06b (Updating rgw.rgw deployment (+1 -> 3))
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 25721ddf-6752-4d87-bf7e-e8caf627a06b (Updating rgw.rgw deployment (+1 -> 3)) in 2 seconds
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 05 09:48:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 05 09:48:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 05 09:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c179255151677b734e7af0ce60a84ebe5b55079f1595eebbb72b3603634944ef-merged.mount: Deactivated successfully.
Dec 05 09:48:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 916b8b2e-dc94-4ae0-a583-ca1955a4c7fb (Updating mds.cephfs deployment (+3 -> 3))
Dec 05 09:48:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.qyxerc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:48:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.qyxerc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:48:16 compute-0 podman[95269]: 2025-12-05 09:48:16.222865749 +0000 UTC m=+0.633926941 container remove a88df3962ec042b2ffd9c8233eb43ef8a99fa943b8b9a2423d508ff796c40e98 (image=quay.io/ceph/ceph:v19, name=nervous_golick, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:48:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.qyxerc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:48:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:48:16 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.qyxerc on compute-2
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.qyxerc on compute-2
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:48:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:48:16 compute-0 systemd[1]: libpod-conmon-a88df3962ec042b2ffd9c8233eb43ef8a99fa943b8b9a2423d508ff796c40e98.scope: Deactivated successfully.
Dec 05 09:48:16 compute-0 sudo[95194]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:16 compute-0 radosgw[95374]: v1 topic migration: starting v1 topic migration..
Dec 05 09:48:16 compute-0 radosgw[95374]: LDAP not started since no server URIs were provided in the configuration.
Dec 05 09:48:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-rgw-rgw-compute-0-pppcpu[95370]: 2025-12-05T09:48:16.414+0000 7f147c109980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec 05 09:48:16 compute-0 radosgw[95374]: v1 topic migration: finished v1 topic migration
Dec 05 09:48:16 compute-0 radosgw[95374]: framework: beast
Dec 05 09:48:16 compute-0 radosgw[95374]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 05 09:48:16 compute-0 radosgw[95374]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 05 09:48:16 compute-0 radosgw[95374]: starting handler: beast
Dec 05 09:48:16 compute-0 ceph-mon[74418]: Deploying daemon rgw.rgw.compute-0.pppcpu on compute-0
Dec 05 09:48:16 compute-0 ceph-mon[74418]: pgmap v20: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:16 compute-0 ceph-mon[74418]: from='client.14526 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:48:16 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:16 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:16 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:16 compute-0 ceph-mon[74418]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 05 09:48:16 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:16 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:16 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.qyxerc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 05 09:48:16 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.qyxerc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 05 09:48:16 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:16 compute-0 ceph-mon[74418]: Deploying daemon mds.cephfs.compute-2.qyxerc on compute-2
Dec 05 09:48:16 compute-0 radosgw[95374]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 09:48:16 compute-0 radosgw[95374]: mgrc service_daemon_register rgw.14544 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.pppcpu,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=9f18a6e0-a6ec-473c-a9cd-a3aa558c03b5,zone_name=default,zonegroup_id=b382a99c-fac1-4429-b1c0-99673026582b,zonegroup_name=default}
Dec 05 09:48:16 compute-0 ansible-async_wrapper.py[94682]: Done in kid B.
Dec 05 09:48:17 compute-0 sudo[96031]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afiybhhtumabpoiaghrqoaynehggwskl ; /usr/bin/python3'
Dec 05 09:48:17 compute-0 sudo[96031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:17 compute-0 python3[96033]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:17 compute-0 podman[96034]: 2025-12-05 09:48:17.338742834 +0000 UTC m=+0.048924942 container create c16be2d68d6dddc80a7b8879b6d6ebc6b67df004754b821deb3170f9c1e48042 (image=quay.io/ceph/ceph:v19, name=dazzling_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:48:17 compute-0 systemd[1]: Started libpod-conmon-c16be2d68d6dddc80a7b8879b6d6ebc6b67df004754b821deb3170f9c1e48042.scope.
Dec 05 09:48:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a448de82fc37b84bdea3055ce3287ec9248954a0db32f07317aa604ffb9b2096/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a448de82fc37b84bdea3055ce3287ec9248954a0db32f07317aa604ffb9b2096/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:17 compute-0 podman[96034]: 2025-12-05 09:48:17.32169944 +0000 UTC m=+0.031881568 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:48:17 compute-0 podman[96034]: 2025-12-05 09:48:17.424627911 +0000 UTC m=+0.134810039 container init c16be2d68d6dddc80a7b8879b6d6ebc6b67df004754b821deb3170f9c1e48042 (image=quay.io/ceph/ceph:v19, name=dazzling_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:48:17 compute-0 podman[96034]: 2025-12-05 09:48:17.430883101 +0000 UTC m=+0.141065229 container start c16be2d68d6dddc80a7b8879b6d6ebc6b67df004754b821deb3170f9c1e48042 (image=quay.io/ceph/ceph:v19, name=dazzling_maxwell, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:48:17 compute-0 podman[96034]: 2025-12-05 09:48:17.436938936 +0000 UTC m=+0.147121044 container attach c16be2d68d6dddc80a7b8879b6d6ebc6b67df004754b821deb3170f9c1e48042 (image=quay.io/ceph/ceph:v19, name=dazzling_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 09:48:17 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14550 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:48:17 compute-0 dazzling_maxwell[96049]: 
Dec 05 09:48:17 compute-0 dazzling_maxwell[96049]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Dec 05 09:48:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:48:17 compute-0 systemd[1]: libpod-c16be2d68d6dddc80a7b8879b6d6ebc6b67df004754b821deb3170f9c1e48042.scope: Deactivated successfully.
Dec 05 09:48:17 compute-0 conmon[96049]: conmon c16be2d68d6dddc80a7b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c16be2d68d6dddc80a7b8879b6d6ebc6b67df004754b821deb3170f9c1e48042.scope/container/memory.events
Dec 05 09:48:17 compute-0 podman[96034]: 2025-12-05 09:48:17.901142648 +0000 UTC m=+0.611324756 container died c16be2d68d6dddc80a7b8879b6d6ebc6b67df004754b821deb3170f9c1e48042 (image=quay.io/ceph/ceph:v19, name=dazzling_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a448de82fc37b84bdea3055ce3287ec9248954a0db32f07317aa604ffb9b2096-merged.mount: Deactivated successfully.
Dec 05 09:48:17 compute-0 podman[96034]: 2025-12-05 09:48:17.942779211 +0000 UTC m=+0.652961319 container remove c16be2d68d6dddc80a7b8879b6d6ebc6b67df004754b821deb3170f9c1e48042 (image=quay.io/ceph/ceph:v19, name=dazzling_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:48:17 compute-0 systemd[1]: libpod-conmon-c16be2d68d6dddc80a7b8879b6d6ebc6b67df004754b821deb3170f9c1e48042.scope: Deactivated successfully.
Dec 05 09:48:17 compute-0 sudo[96031]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v21: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 38 op/s
Dec 05 09:48:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 05 09:48:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e3 new map
Dec 05 09:48:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-12-05T09:48:18:211173+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-05T09:47:51.448919+0000
                                           modified        2025-12-05T09:47:51.448919+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.qyxerc{-1:24184} state up:standby seq 1 addr [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] compat {c=[1],r=[1],i=[1fff]}]
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] up:boot
Dec 05 09:48:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] as mds.0
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.qyxerc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec 05 09:48:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.qyxerc"} v 0)
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.qyxerc"}]: dispatch
Dec 05 09:48:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e3 all = 0
Dec 05 09:48:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e4 new map
Dec 05 09:48:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-12-05T09:48:18:232595+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-05T09:47:51.448919+0000
                                           modified        2025-12-05T09:48:18.232587+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24184}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.qyxerc{0:24184} state up:creating seq 1 addr [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:creating}
Dec 05 09:48:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hfgtsk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hfgtsk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hfgtsk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 05 09:48:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:18 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.hfgtsk on compute-0
Dec 05 09:48:18 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.hfgtsk on compute-0
Dec 05 09:48:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.qyxerc is now active in filesystem cephfs as rank 0
Dec 05 09:48:18 compute-0 sudo[96087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:48:18 compute-0 sudo[96087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:18 compute-0 sudo[96087]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:18 compute-0 sudo[96112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:48:18 compute-0 sudo[96112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:18 compute-0 podman[96177]: 2025-12-05 09:48:18.739993004 +0000 UTC m=+0.038468818 container create 2d78a4099b96c1e5a19abdbdcdb372905427d8d8a46a80241af9d68ec236a087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_buck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 09:48:18 compute-0 systemd[1]: Started libpod-conmon-2d78a4099b96c1e5a19abdbdcdb372905427d8d8a46a80241af9d68ec236a087.scope.
Dec 05 09:48:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:18 compute-0 podman[96177]: 2025-12-05 09:48:18.80707937 +0000 UTC m=+0.105555234 container init 2d78a4099b96c1e5a19abdbdcdb372905427d8d8a46a80241af9d68ec236a087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_buck, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 09:48:18 compute-0 podman[96177]: 2025-12-05 09:48:18.81447844 +0000 UTC m=+0.112954264 container start 2d78a4099b96c1e5a19abdbdcdb372905427d8d8a46a80241af9d68ec236a087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 09:48:18 compute-0 podman[96177]: 2025-12-05 09:48:18.723265259 +0000 UTC m=+0.021741093 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:48:18 compute-0 crazy_buck[96195]: 167 167
Dec 05 09:48:18 compute-0 systemd[1]: libpod-2d78a4099b96c1e5a19abdbdcdb372905427d8d8a46a80241af9d68ec236a087.scope: Deactivated successfully.
Dec 05 09:48:18 compute-0 podman[96177]: 2025-12-05 09:48:18.820073593 +0000 UTC m=+0.118549497 container attach 2d78a4099b96c1e5a19abdbdcdb372905427d8d8a46a80241af9d68ec236a087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_buck, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:18 compute-0 podman[96177]: 2025-12-05 09:48:18.820475983 +0000 UTC m=+0.118951797 container died 2d78a4099b96c1e5a19abdbdcdb372905427d8d8a46a80241af9d68ec236a087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:48:18 compute-0 sudo[96222]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beianwsafwcyrfslkphimdgykxaawbge ; /usr/bin/python3'
Dec 05 09:48:18 compute-0 sudo[96222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e68ffe1a2686f8044dc1b5983bae56b6047c5a92cebb31a6f312f885efa8019-merged.mount: Deactivated successfully.
Dec 05 09:48:18 compute-0 podman[96177]: 2025-12-05 09:48:18.88867418 +0000 UTC m=+0.187149994 container remove 2d78a4099b96c1e5a19abdbdcdb372905427d8d8a46a80241af9d68ec236a087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 09:48:18 compute-0 systemd[1]: libpod-conmon-2d78a4099b96c1e5a19abdbdcdb372905427d8d8a46a80241af9d68ec236a087.scope: Deactivated successfully.
Dec 05 09:48:18 compute-0 systemd[1]: Reloading.
Dec 05 09:48:18 compute-0 python3[96231]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
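The Ansible task above shells out to a one-shot podman container to query cephadm. A minimal sketch of the same call from Python, assuming the fsid, image tag and key paths shown in the log line (the helper name and error handling are illustrative, not part of the playbook):

    import json
    import subprocess

    # Values copied from the command in the log line above.
    FSID = "3c63ce0f-5206-59ae-8381-b67d0b6424b5"
    IMAGE = "quay.io/ceph/ceph:v19"

    def orch_ps():
        """Run 'ceph orch ps -f json' in a throwaway container and return the parsed daemon list."""
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", IMAGE,
            "--fsid", FSID,
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "orch", "ps", "-f", "json",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return json.loads(out.stdout)

    for d in orch_ps():
        print(d["daemon_name"], d["status_desc"])

Its output is the large JSON array logged by the sad_varahamihira container below.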
Dec 05 09:48:19 compute-0 systemd-rc-local-generator[96260]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:48:19 compute-0 systemd-sysv-generator[96267]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:48:19 compute-0 podman[96240]: 2025-12-05 09:48:19.130301604 +0000 UTC m=+0.116747338 container create ec2f1078faaf14ff58647adaae98677d205efbaa4ae7193beb7d19f744013227 (image=quay.io/ceph/ceph:v19, name=sad_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 05 09:48:19 compute-0 podman[96240]: 2025-12-05 09:48:19.035944437 +0000 UTC m=+0.022390191 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:48:19 compute-0 systemd[1]: Started libpod-conmon-ec2f1078faaf14ff58647adaae98677d205efbaa4ae7193beb7d19f744013227.scope.
Dec 05 09:48:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3d4848bb6407984a29ff22e3233f9e6e617ce51d660aacd56ca74cb69fd557/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3d4848bb6407984a29ff22e3233f9e6e617ce51d660aacd56ca74cb69fd557/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:19 compute-0 systemd[1]: Reloading.
Dec 05 09:48:19 compute-0 podman[96240]: 2025-12-05 09:48:19.318398753 +0000 UTC m=+0.304844507 container init ec2f1078faaf14ff58647adaae98677d205efbaa4ae7193beb7d19f744013227 (image=quay.io/ceph/ceph:v19, name=sad_varahamihira, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 05 09:48:19 compute-0 ceph-mon[74418]: from='client.14550 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:48:19 compute-0 ceph-mon[74418]: pgmap v21: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 38 op/s
Dec 05 09:48:19 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:19 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:19 compute-0 ceph-mon[74418]: mds.? [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] up:boot
Dec 05 09:48:19 compute-0 ceph-mon[74418]: daemon mds.cephfs.compute-2.qyxerc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 05 09:48:19 compute-0 ceph-mon[74418]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 05 09:48:19 compute-0 ceph-mon[74418]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 05 09:48:19 compute-0 ceph-mon[74418]: fsmap cephfs:0 1 up:standby
Dec 05 09:48:19 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.qyxerc"}]: dispatch
Dec 05 09:48:19 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:19 compute-0 ceph-mon[74418]: fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:creating}
Dec 05 09:48:19 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hfgtsk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 05 09:48:19 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hfgtsk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 05 09:48:19 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:19 compute-0 ceph-mon[74418]: Deploying daemon mds.cephfs.compute-0.hfgtsk on compute-0
Dec 05 09:48:19 compute-0 ceph-mon[74418]: daemon mds.cephfs.compute-2.qyxerc is now active in filesystem cephfs as rank 0
Dec 05 09:48:19 compute-0 podman[96240]: 2025-12-05 09:48:19.327620003 +0000 UTC m=+0.314065727 container start ec2f1078faaf14ff58647adaae98677d205efbaa4ae7193beb7d19f744013227 (image=quay.io/ceph/ceph:v19, name=sad_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:19 compute-0 systemd-sysv-generator[96326]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:48:19 compute-0 systemd-rc-local-generator[96323]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:48:19 compute-0 podman[96240]: 2025-12-05 09:48:19.460379126 +0000 UTC m=+0.446824880 container attach ec2f1078faaf14ff58647adaae98677d205efbaa4ae7193beb7d19f744013227 (image=quay.io/ceph/ceph:v19, name=sad_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 05 09:48:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e5 new map
Dec 05 09:48:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-12-05T09:48:19:295274+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-05T09:47:51.448919+0000
                                           modified        2025-12-05T09:48:19.295271+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24184}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24184 members: 24184
                                           [mds.cephfs.compute-2.qyxerc{0:24184} state up:active seq 2 addr [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 05 09:48:19 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] up:active
Dec 05 09:48:19 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active}
Dec 05 09:48:19 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.hfgtsk for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:48:20 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.14559 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:48:20 compute-0 sad_varahamihira[96291]: 
Dec 05 09:48:20 compute-0 sad_varahamihira[96291]: [{"container_id": "b271b4e2be81", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.08%", "created": "2025-12-05T09:44:17.225129Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T09:47:57.720328Z", "memory_usage": 7790919, "ports": [], "service_name": "crash", "started": "2025-12-05T09:44:17.135611Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@crash.compute-0", "version": "19.2.3"}, {"container_id": "383728bd0076", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.46%", "created": "2025-12-05T09:45:17.028980Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-05T09:47:51.836647Z", "memory_usage": 7821328, "ports": [], "service_name": "crash", "started": "2025-12-05T09:45:16.638784Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@crash.compute-1", "version": "19.2.3"}, {"container_id": "843ed3f790f8", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.31%", "created": "2025-12-05T09:46:38.082176Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-05T09:47:48.192129Z", "memory_usage": 7812939, "ports": [], "service_name": "crash", "started": "2025-12-05T09:46:37.949582Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@crash.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.compute-2.qyxerc", "daemon_name": "mds.cephfs.compute-2.qyxerc", "daemon_type": "mds", "events": ["2025-12-05T09:48:18.218883Z daemon:mds.cephfs.compute-2.qyxerc [INFO] \"Deployed mds.cephfs.compute-2.qyxerc on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "95284dae4ab8", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "21.86%", "created": "2025-12-05T09:43:29.261820Z", "daemon_id": 
"compute-0.hvnxai", "daemon_name": "mgr.compute-0.hvnxai", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T09:47:57.720151Z", "memory_usage": 541274931, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-05T09:43:28.206477Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@mgr.compute-0.hvnxai", "version": "19.2.3"}, {"container_id": "306ca8f78b92", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "47.39%", "created": "2025-12-05T09:46:36.128279Z", "daemon_id": "compute-1.unhddt", "daemon_name": "mgr.compute-1.unhddt", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-05T09:47:51.836922Z", "memory_usage": 504469913, "ports": [8765], "service_name": "mgr", "started": "2025-12-05T09:46:36.011351Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@mgr.compute-1.unhddt", "version": "19.2.3"}, {"container_id": "d22e0e900a70", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "36.29%", "created": "2025-12-05T09:46:30.463987Z", "daemon_id": "compute-2.wewrgp", "daemon_name": "mgr.compute-2.wewrgp", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-05T09:47:48.191999Z", "memory_usage": 503631052, "ports": [8765], "service_name": "mgr", "started": "2025-12-05T09:46:30.341278Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@mgr.compute-2.wewrgp", "version": "19.2.3"}, {"container_id": "07237ca89b59", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.20%", "created": "2025-12-05T09:43:21.345453Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T09:47:57.719980Z", "memory_request": 2147483648, "memory_usage": 56832819, "ports": [], "service_name": "mon", "started": "2025-12-05T09:43:25.154044Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@mon.compute-0", "version": "19.2.3"}, {"container_id": "c6c19b1ebfdc", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.22%", "created": "2025-12-05T09:46:27.035002Z", "daemon_id": "compute-1", "daemon_name": 
"mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-05T09:47:51.836849Z", "memory_request": 2147483648, "memory_usage": 46640660, "ports": [], "service_name": "mon", "started": "2025-12-05T09:46:26.944146Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@mon.compute-1", "version": "19.2.3"}, {"container_id": "c0a6b65e9dcf", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.48%", "created": "2025-12-05T09:46:23.002801Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-05T09:47:48.191827Z", "memory_request": 2147483648, "memory_usage": 47804579, "ports": [], "service_name": "mon", "started": "2025-12-05T09:46:22.882847Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@mon.compute-2", "version": "19.2.3"}, {"container_id": "dc2521f476ac", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80", "quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.13%", "created": "2025-12-05T09:47:32.771257Z", "daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T09:47:57.720569Z", "memory_usage": 5922357, "ports": [9100], "service_name": "node-exporter", "started": "2025-12-05T09:47:32.691300Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@node-exporter.compute-0", "version": "1.7.0"}, {"daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "events": ["2025-12-05T09:48:05.623707Z daemon:node-exporter.compute-1 [INFO] \"Deployed node-exporter.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2025-12-05T09:48:08.454430Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "2e7da1a95f32", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.21%", "created": "2025-12-05T09:45:33.957982Z", "daemon_id": "1", "daemon_name": "osd.1", 
"daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T09:47:57.720467Z", "memory_request": 4294967296, "memory_usage": 76084674, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-05T09:45:33.854823Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@osd.1", "version": "19.2.3"}, {"container_id": "e07ad27929b4", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.80%", "created": "2025-12-05T09:45:35.265428Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-05T09:47:51.836778Z", "memory_request": 4294967296, "memory_usage": 73326919, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-05T09:45:35.171788Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@osd.0", "version": "19.2.3"}, {"container_id": "363697a8e4bf", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "3.21%", "created": "2025-12-05T09:46:52.063512Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-05T09:47:48.192265Z", "memory_request": 4294967296, "memory_usage": 67276636, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-05T09:46:51.952262Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@osd.2", "version": "19.2.3"}, {"daemon_id": "rgw.compute-0.pppcpu", "daemon_name": "rgw.rgw.compute-0.pppcpu", "daemon_type": "rgw", "events": ["2025-12-05T09:48:16.162863Z daemon:rgw.rgw.compute-0.pppcpu [INFO] \"Deployed rgw.rgw.compute-0.pppcpu on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}, {"container_id": "f8bd5ec45950", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.28%", "created": "2025-12-05T09:47:14.681323Z", "daemon_id": "rgw.compute-1.oiufcm", "daemon_name": "rgw.rgw.compute-1.oiufcm", "daemon_type": "rgw", "hostname": "compute-1", "ip": "192.168.122.101", "is_active": false, "last_refresh": "2025-12-05T09:47:51.837007Z", "memory_usage": 104134082, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-12-05T09:47:14.561645Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@rgw.rgw.compute-1.oiufcm", "version": "19.2.3"}, {"container_id": "d9726757dc46", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.11%", "created": "2025-12-05T09:47:12.760007Z", "daemon_id": "rgw.compute-2.gzawrf", "daemon_name": "rgw.rgw.compute-2.gzawrf", "daemon_type": "rgw", "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "last_refresh": "2025-12-05T09:47:48.192373Z", "memory_usage": 102288588, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-12-05T09:47:12.648595Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@rgw.rgw.compute-2.gzawrf", "version": "19.2.3"}]
Dec 05 09:48:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v22: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 38 op/s
Dec 05 09:48:20 compute-0 systemd[1]: libpod-ec2f1078faaf14ff58647adaae98677d205efbaa4ae7193beb7d19f744013227.scope: Deactivated successfully.
Dec 05 09:48:20 compute-0 podman[96240]: 2025-12-05 09:48:20.135963 +0000 UTC m=+1.122408734 container died ec2f1078faaf14ff58647adaae98677d205efbaa4ae7193beb7d19f744013227 (image=quay.io/ceph/ceph:v19, name=sad_varahamihira, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:48:20 compute-0 rsyslogd[1004]: message too long (14938) with configured size 8096, begin of message is: [{"container_id": "b271b4e2be81", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
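The rsyslogd complaint refers to the orch ps JSON just logged: at 14938 bytes it exceeds the configured 8096-byte message limit, so the copy relayed through rsyslog is truncated (the journald copy above is intact). Per the referenced rsyslog.com/e/2445 page, raising the global limit, e.g. $MaxMessageSize 64k in legacy syntax or global(maxMessageSize="64k") in the newer one, placed before any input modules are loaded, should avoid the truncation.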
Dec 05 09:48:21 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 14 completed events
Dec 05 09:48:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Dec 05 09:48:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Dec 05 09:48:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Dec 05 09:48:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:48:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:48:26 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.531776905s, txc = 0x563f25e56f00
Dec 05 09:48:26 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 6.531656742s
Dec 05 09:48:26 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 6.531656742s
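These three log_latency entries record a single ~6.5 s kv_sync/kv_commit stall in osd.1's BlueStore; this appears to be what later trips the BLUESTORE_SLOW_OP_ALERT health check ("1 OSD(s) experiencing slow operations in BlueStore") reported in the ceph -s JSON further down.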
Dec 05 09:48:26 compute-0 ceph-mon[74418]: mds.? [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] up:active
Dec 05 09:48:26 compute-0 ceph-mon[74418]: fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active}
Dec 05 09:48:26 compute-0 ceph-mon[74418]: from='client.14559 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:48:26 compute-0 ceph-mon[74418]: pgmap v22: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 38 op/s
Dec 05 09:48:26 compute-0 podman[96402]: 2025-12-05 09:48:26.802172615 +0000 UTC m=+6.666199056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef3d4848bb6407984a29ff22e3233f9e6e617ce51d660aacd56ca74cb69fd557-merged.mount: Deactivated successfully.
Dec 05 09:48:27 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:27 compute-0 podman[96240]: 2025-12-05 09:48:27.954213743 +0000 UTC m=+8.940659487 container remove ec2f1078faaf14ff58647adaae98677d205efbaa4ae7193beb7d19f744013227 (image=quay.io/ceph/ceph:v19, name=sad_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:27 compute-0 systemd[1]: libpod-conmon-ec2f1078faaf14ff58647adaae98677d205efbaa4ae7193beb7d19f744013227.scope: Deactivated successfully.
Dec 05 09:48:27 compute-0 sudo[96222]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Dec 05 09:48:28 compute-0 podman[96402]: 2025-12-05 09:48:28.151906952 +0000 UTC m=+8.015933363 container create 779d91058d609da8cfd8679e477a40eded5b7acf22b4268660271431940b80b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mds-cephfs-compute-0-hfgtsk, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 05 09:48:28 compute-0 ceph-mon[74418]: pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Dec 05 09:48:28 compute-0 ceph-mon[74418]: pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Dec 05 09:48:28 compute-0 ceph-mon[74418]: pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Dec 05 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba7ec348d4d9ddcbeb46d801111220f7389bccebd9c88c3c6b0be14115faaa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba7ec348d4d9ddcbeb46d801111220f7389bccebd9c88c3c6b0be14115faaa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba7ec348d4d9ddcbeb46d801111220f7389bccebd9c88c3c6b0be14115faaa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba7ec348d4d9ddcbeb46d801111220f7389bccebd9c88c3c6b0be14115faaa6/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.hfgtsk supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:28 compute-0 podman[96402]: 2025-12-05 09:48:28.91955276 +0000 UTC m=+8.783579201 container init 779d91058d609da8cfd8679e477a40eded5b7acf22b4268660271431940b80b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mds-cephfs-compute-0-hfgtsk, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 09:48:28 compute-0 podman[96402]: 2025-12-05 09:48:28.925322157 +0000 UTC m=+8.789348568 container start 779d91058d609da8cfd8679e477a40eded5b7acf22b4268660271431940b80b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mds-cephfs-compute-0-hfgtsk, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:48:28 compute-0 sudo[96458]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlieyevhjrabzrnsgdhhwwlbomgcgqif ; /usr/bin/python3'
Dec 05 09:48:28 compute-0 sudo[96458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:29 compute-0 python3[96462]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:29 compute-0 bash[96402]: 779d91058d609da8cfd8679e477a40eded5b7acf22b4268660271431940b80b0
Dec 05 09:48:29 compute-0 ceph-mds[96460]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 09:48:29 compute-0 ceph-mds[96460]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec 05 09:48:29 compute-0 ceph-mds[96460]: main not setting numa affinity
Dec 05 09:48:29 compute-0 ceph-mds[96460]: pidfile_write: ignore empty --pid-file
Dec 05 09:48:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mds-cephfs-compute-0-hfgtsk[96432]: starting mds.cephfs.compute-0.hfgtsk at 
Dec 05 09:48:29 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.hfgtsk for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
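As the systemd_unit fields in the orch ps JSON above show, cephadm wraps each daemon in a templated unit named ceph-<fsid>@<daemon_type>.<daemon_id>; the "Starting/Started Ceph mds.cephfs.compute-0.hfgtsk for 3c63ce0f-5206-59ae-8381-b67d0b6424b5" pair here is that unit being activated, with podman[96402] and its conmon scope doing the actual container start.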
Dec 05 09:48:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Updating MDS map to version 5 from mon.0
Dec 05 09:48:29 compute-0 podman[96471]: 2025-12-05 09:48:29.216684176 +0000 UTC m=+0.105146812 container create 0918c33253fd56698662ed6130d769ad71cc4bebc9d8b8aa1681509051ccfc2b (image=quay.io/ceph/ceph:v19, name=pensive_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:48:29 compute-0 podman[96471]: 2025-12-05 09:48:29.138986972 +0000 UTC m=+0.027449628 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:48:29 compute-0 sudo[96112]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:48:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e6 new map
Dec 05 09:48:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2025-12-05T09:48:29:152554+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-05T09:47:51.448919+0000
                                           modified        2025-12-05T09:48:19.295271+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24184}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24184 members: 24184
                                           [mds.cephfs.compute-2.qyxerc{0:24184} state up:active seq 2 addr [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.hfgtsk{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/274001102,v1:192.168.122.100:6807/274001102] compat {c=[1],r=[1],i=[1fff]}]
Dec 05 09:48:29 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:29 compute-0 ceph-mon[74418]: pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Dec 05 09:48:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Updating MDS map to version 6 from mon.0
Dec 05 09:48:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Monitors have assigned me to become a standby
Dec 05 09:48:29 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/274001102,v1:192.168.122.100:6807/274001102] up:boot
Dec 05 09:48:29 compute-0 systemd[1]: Started libpod-conmon-0918c33253fd56698662ed6130d769ad71cc4bebc9d8b8aa1681509051ccfc2b.scope.
Dec 05 09:48:29 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active} 1 up:standby
Dec 05 09:48:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.hfgtsk"} v 0)
Dec 05 09:48:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.hfgtsk"}]: dispatch
Dec 05 09:48:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e6 all = 0
Dec 05 09:48:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e7 new map
Dec 05 09:48:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2025-12-05T09:48:29:347986+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-05T09:47:51.448919+0000
                                           modified        2025-12-05T09:48:19.295271+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24184}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24184 members: 24184
                                           [mds.cephfs.compute-2.qyxerc{0:24184} state up:active seq 2 addr [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.hfgtsk{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/274001102,v1:192.168.122.100:6807/274001102] compat {c=[1],r=[1],i=[1fff]}]
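Between e6 and e7 the filesystem content for rank 0 is unchanged; what changes is the standby side: mds.cephfs.compute-0.hfgtsk (deployed a few lines earlier) registers under "Standby daemons", and standby_count_wanted moves from 0 to 1, while cephfs keeps mds.cephfs.compute-2.qyxerc as its single up:active rank.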
Dec 05 09:48:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:29 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active} 1 up:standby
Dec 05 09:48:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:48:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7b7e5454f7c6bb1d867bb00de3f50a8afde29fd730065ee5e82283f674f13b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7b7e5454f7c6bb1d867bb00de3f50a8afde29fd730065ee5e82283f674f13b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 05 09:48:29 compute-0 podman[96471]: 2025-12-05 09:48:29.401928037 +0000 UTC m=+0.290390693 container init 0918c33253fd56698662ed6130d769ad71cc4bebc9d8b8aa1681509051ccfc2b (image=quay.io/ceph/ceph:v19, name=pensive_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:48:29 compute-0 podman[96471]: 2025-12-05 09:48:29.410564902 +0000 UTC m=+0.299027538 container start 0918c33253fd56698662ed6130d769ad71cc4bebc9d8b8aa1681509051ccfc2b (image=quay.io/ceph/ceph:v19, name=pensive_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Dec 05 09:48:29 compute-0 podman[96471]: 2025-12-05 09:48:29.414730265 +0000 UTC m=+0.303192891 container attach 0918c33253fd56698662ed6130d769ad71cc4bebc9d8b8aa1681509051ccfc2b (image=quay.io/ceph/ceph:v19, name=pensive_sinoussi, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:48:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hxfsnw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 05 09:48:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hxfsnw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 05 09:48:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hxfsnw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 05 09:48:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:48:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:29 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.hxfsnw on compute-1
Dec 05 09:48:29 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.hxfsnw on compute-1
Dec 05 09:48:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 05 09:48:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033675432' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 09:48:29 compute-0 pensive_sinoussi[96497]: 
Dec 05 09:48:29 compute-0 pensive_sinoussi[96497]: {"fsid":"3c63ce0f-5206-59ae-8381-b67d0b6424b5","health":{"status":"HEALTH_WARN","checks":{"BLUESTORE_SLOW_OP_ALERT":{"severity":"HEALTH_WARN","summary":{"message":"1 OSD(s) experiencing slow operations in BlueStore","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":115,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":52,"num_osds":3,"num_up_osds":3,"osd_up_since":1764928023,"num_in_osds":3,"osd_in_since":1764928000,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":216,"data_bytes":467025,"bytes_used":89088000,"bytes_avail":64322838528,"bytes_total":64411926528,"read_bytes_sec":54159,"write_bytes_sec":1194,"read_op_per_sec":52,"write_op_per_sec":32},"fsmap":{"epoch":7,"btime":"2025-12-05T09:48:29:347986+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.qyxerc","status":"up:active","gid":24184}],"up:standby":1},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":6,"modified":"2025-12-05T09:48:18.125134+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.hvnxai":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.unhddt":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.wewrgp":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14544":{"start_epoch":6,"start_stamp":"2025-12-05T09:48:16.512374+0000","gid":14544,"addr":"192.168.122.100:0/420970584","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.pppcpu","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 
2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864316","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"9f18a6e0-a6ec-473c-a9cd-a3aa558c03b5","zone_name":"default","zonegroup_id":"b382a99c-fac1-4429-b1c0-99673026582b","zonegroup_name":"default"},"task_status":{}},"24134":{"start_epoch":5,"start_stamp":"2025-12-05T09:47:24.354013+0000","gid":24134,"addr":"192.168.122.101:0/3300078974","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.oiufcm","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864308","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"9f18a6e0-a6ec-473c-a9cd-a3aa558c03b5","zone_name":"default","zonegroup_id":"b382a99c-fac1-4429-b1c0-99673026582b","zonegroup_name":"default"},"task_status":{}},"24142":{"start_epoch":5,"start_stamp":"2025-12-05T09:47:24.283577+0000","gid":24142,"addr":"192.168.122.102:0/3149331825","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.gzawrf","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864320","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"9f18a6e0-a6ec-473c-a9cd-a3aa558c03b5","zone_name":"default","zonegroup_id":"b382a99c-fac1-4429-b1c0-99673026582b","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"916b8b2e-dc94-4ae0-a583-ca1955a4c7fb":{"message":"Updating mds.cephfs deployment (+3 -> 3) (2s)\n      [=========...................] (remaining: 4s)","progress":0.3333333432674408,"add_to_ceph_s":true}}}
Dec 05 09:48:29 compute-0 systemd[1]: libpod-0918c33253fd56698662ed6130d769ad71cc4bebc9d8b8aa1681509051ccfc2b.scope: Deactivated successfully.
Dec 05 09:48:29 compute-0 podman[96471]: 2025-12-05 09:48:29.882693708 +0000 UTC m=+0.771156354 container died 0918c33253fd56698662ed6130d769ad71cc4bebc9d8b8aa1681509051ccfc2b (image=quay.io/ceph/ceph:v19, name=pensive_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 05 09:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf7b7e5454f7c6bb1d867bb00de3f50a8afde29fd730065ee5e82283f674f13b-merged.mount: Deactivated successfully.
Dec 05 09:48:29 compute-0 podman[96471]: 2025-12-05 09:48:29.924173217 +0000 UTC m=+0.812635853 container remove 0918c33253fd56698662ed6130d769ad71cc4bebc9d8b8aa1681509051ccfc2b (image=quay.io/ceph/ceph:v19, name=pensive_sinoussi, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:48:29 compute-0 systemd[1]: libpod-conmon-0918c33253fd56698662ed6130d769ad71cc4bebc9d8b8aa1681509051ccfc2b.scope: Deactivated successfully.
Dec 05 09:48:29 compute-0 sudo[96458]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.2 KiB/s wr, 47 op/s
Dec 05 09:48:30 compute-0 sudo[96555]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bktijsaxdhqfdjxepdpqzyfwycbfhuok ; /usr/bin/python3'
Dec 05 09:48:30 compute-0 sudo[96555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:31 compute-0 python3[96557]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:31 compute-0 podman[96558]: 2025-12-05 09:48:31.185981492 +0000 UTC m=+0.044702488 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:48:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:48:32 compute-0 podman[96558]: 2025-12-05 09:48:32.060731325 +0000 UTC m=+0.919452291 container create cbf5184deb55d605530e54d894d2227de3eeed0c62fce37869fc1d32a54dd515 (image=quay.io/ceph/ceph:v19, name=hungry_greider, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:48:32 compute-0 ceph-mon[74418]: mds.? [v2:192.168.122.100:6806/274001102,v1:192.168.122.100:6807/274001102] up:boot
Dec 05 09:48:32 compute-0 ceph-mon[74418]: fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active} 1 up:standby
Dec 05 09:48:32 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.hfgtsk"}]: dispatch
Dec 05 09:48:32 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:32 compute-0 ceph-mon[74418]: fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active} 1 up:standby
Dec 05 09:48:32 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:32 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:32 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hxfsnw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 05 09:48:32 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hxfsnw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 05 09:48:32 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:32 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3033675432' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 09:48:32 compute-0 systemd[1]: Started libpod-conmon-cbf5184deb55d605530e54d894d2227de3eeed0c62fce37869fc1d32a54dd515.scope.
Dec 05 09:48:32 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652b9eca5280ece9bea8d5a27b685e3a551495d6371aeb31fe14d42eaf6478ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652b9eca5280ece9bea8d5a27b685e3a551495d6371aeb31fe14d42eaf6478ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.2 KiB/s wr, 47 op/s
Dec 05 09:48:32 compute-0 podman[96558]: 2025-12-05 09:48:32.137618277 +0000 UTC m=+0.996339263 container init cbf5184deb55d605530e54d894d2227de3eeed0c62fce37869fc1d32a54dd515 (image=quay.io/ceph/ceph:v19, name=hungry_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 09:48:32 compute-0 podman[96558]: 2025-12-05 09:48:32.144747291 +0000 UTC m=+1.003468257 container start cbf5184deb55d605530e54d894d2227de3eeed0c62fce37869fc1d32a54dd515 (image=quay.io/ceph/ceph:v19, name=hungry_greider, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 09:48:32 compute-0 podman[96558]: 2025-12-05 09:48:32.148636157 +0000 UTC m=+1.007357123 container attach cbf5184deb55d605530e54d894d2227de3eeed0c62fce37869fc1d32a54dd515 (image=quay.io/ceph/ceph:v19, name=hungry_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:48:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 05 09:48:32 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4151448535' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 09:48:32 compute-0 hungry_greider[96574]: 
Dec 05 09:48:32 compute-0 systemd[1]: libpod-cbf5184deb55d605530e54d894d2227de3eeed0c62fce37869fc1d32a54dd515.scope: Deactivated successfully.
Dec 05 09:48:32 compute-0 hungry_greider[96574]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_i
nsecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.hvnxai/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.unhddt/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.wewrgp/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.pppcpu","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.oiufcm","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.gzawrf","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec 05 09:48:32 compute-0 podman[96558]: 2025-12-05 09:48:32.572575212 +0000 UTC m=+1.431296208 container died cbf5184deb55d605530e54d894d2227de3eeed0c62fce37869fc1d32a54dd515 (image=quay.io/ceph/ceph:v19, name=hungry_greider, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 09:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-652b9eca5280ece9bea8d5a27b685e3a551495d6371aeb31fe14d42eaf6478ed-merged.mount: Deactivated successfully.
Dec 05 09:48:32 compute-0 podman[96558]: 2025-12-05 09:48:32.953990542 +0000 UTC m=+1.812711508 container remove cbf5184deb55d605530e54d894d2227de3eeed0c62fce37869fc1d32a54dd515 (image=quay.io/ceph/ceph:v19, name=hungry_greider, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:48:32 compute-0 sudo[96555]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:33 compute-0 systemd[1]: libpod-conmon-cbf5184deb55d605530e54d894d2227de3eeed0c62fce37869fc1d32a54dd515.scope: Deactivated successfully.
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check update: 2 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: Deploying daemon mds.cephfs.compute-1.hxfsnw on compute-1
Dec 05 09:48:33 compute-0 ceph-mon[74418]: pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.2 KiB/s wr, 47 op/s
Dec 05 09:48:33 compute-0 ceph-mon[74418]: pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.2 KiB/s wr, 47 op/s
Dec 05 09:48:33 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4151448535' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 09:48:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 916b8b2e-dc94-4ae0-a583-ca1955a4c7fb (Updating mds.cephfs deployment (+3 -> 3))
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 916b8b2e-dc94-4ae0-a583-ca1955a4c7fb (Updating mds.cephfs deployment (+3 -> 3)) in 17 seconds
Dec 05 09:48:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 9c789668-cfb5-497f-91f1-5d6807315926 (Updating nfs.cephfs deployment (+3 -> 3))
Dec 05 09:48:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.qiwwqr
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.qiwwqr
Dec 05 09:48:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qiwwqr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qiwwqr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qiwwqr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 05 09:48:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 05 09:48:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.qiwwqr-rgw
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.qiwwqr-rgw
Dec 05 09:48:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qiwwqr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qiwwqr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qiwwqr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.qiwwqr's ganesha conf is defaulting to empty
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.qiwwqr's ganesha conf is defaulting to empty
Dec 05 09:48:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:48:33 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.qiwwqr on compute-1
Dec 05 09:48:33 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.qiwwqr on compute-1
Dec 05 09:48:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:34 compute-0 ceph-mon[74418]: Health check update: 2 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:34 compute-0 ceph-mon[74418]: Creating key for client.nfs.cephfs.0.0.compute-1.qiwwqr
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qiwwqr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qiwwqr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 05 09:48:34 compute-0 ceph-mon[74418]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qiwwqr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qiwwqr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 09:48:34 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e8 new map
Dec 05 09:48:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2025-12-05T09:48:34:317027+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-05T09:47:51.448919+0000
                                           modified        2025-12-05T09:48:19.295271+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24184}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24184 members: 24184
                                           [mds.cephfs.compute-2.qyxerc{0:24184} state up:active seq 2 addr [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.hfgtsk{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/274001102,v1:192.168.122.100:6807/274001102] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.hxfsnw{-1:24215} state up:standby seq 1 addr [v2:192.168.122.101:6804/2817965964,v1:192.168.122.101:6805/2817965964] compat {c=[1],r=[1],i=[1fff]}]
Dec 05 09:48:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2817965964,v1:192.168.122.101:6805/2817965964] up:boot
Dec 05 09:48:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active} 2 up:standby
Dec 05 09:48:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.hxfsnw"} v 0)
Dec 05 09:48:34 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.hxfsnw"}]: dispatch
Dec 05 09:48:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e8 all = 0
Dec 05 09:48:35 compute-0 sudo[96670]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnfmnrvgdqmmfyatwbmwfemzpgyoewdj ; /usr/bin/python3'
Dec 05 09:48:35 compute-0 sudo[96670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:35 compute-0 python3[96672]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:35 compute-0 podman[96673]: 2025-12-05 09:48:35.307644917 +0000 UTC m=+0.042797046 container create f38eb2bd5fee4666f442ad806eaa28e4238b5c45a5a7cbae461d136a09172a11 (image=quay.io/ceph/ceph:v19, name=charming_mclean, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:35 compute-0 systemd[1]: Started libpod-conmon-f38eb2bd5fee4666f442ad806eaa28e4238b5c45a5a7cbae461d136a09172a11.scope.
Dec 05 09:48:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36c91b6f01608922bb9f395667acf828ebfc20d28d3a5cdca0e77510de4e85be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36c91b6f01608922bb9f395667acf828ebfc20d28d3a5cdca0e77510de4e85be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:35 compute-0 podman[96673]: 2025-12-05 09:48:35.377351184 +0000 UTC m=+0.112503353 container init f38eb2bd5fee4666f442ad806eaa28e4238b5c45a5a7cbae461d136a09172a11 (image=quay.io/ceph/ceph:v19, name=charming_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:48:35 compute-0 podman[96673]: 2025-12-05 09:48:35.38383928 +0000 UTC m=+0.118991419 container start f38eb2bd5fee4666f442ad806eaa28e4238b5c45a5a7cbae461d136a09172a11 (image=quay.io/ceph/ceph:v19, name=charming_mclean, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:48:35 compute-0 podman[96673]: 2025-12-05 09:48:35.289426191 +0000 UTC m=+0.024578340 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:48:35 compute-0 podman[96673]: 2025-12-05 09:48:35.39044669 +0000 UTC m=+0.125598839 container attach f38eb2bd5fee4666f442ad806eaa28e4238b5c45a5a7cbae461d136a09172a11 (image=quay.io/ceph/ceph:v19, name=charming_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:48:35 compute-0 ceph-mon[74418]: Rados config object exists: conf-nfs.cephfs
Dec 05 09:48:35 compute-0 ceph-mon[74418]: Creating key for client.nfs.cephfs.0.0.compute-1.qiwwqr-rgw
Dec 05 09:48:35 compute-0 ceph-mon[74418]: Bind address in nfs.cephfs.0.0.compute-1.qiwwqr's ganesha conf is defaulting to empty
Dec 05 09:48:35 compute-0 ceph-mon[74418]: Deploying daemon nfs.cephfs.0.0.compute-1.qiwwqr on compute-1
Dec 05 09:48:35 compute-0 ceph-mon[74418]: pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 05 09:48:35 compute-0 ceph-mon[74418]: mds.? [v2:192.168.122.101:6804/2817965964,v1:192.168.122.101:6805/2817965964] up:boot
Dec 05 09:48:35 compute-0 ceph-mon[74418]: fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active} 2 up:standby
Dec 05 09:48:35 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.hxfsnw"}]: dispatch
Dec 05 09:48:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e9 new map
Dec 05 09:48:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           btime 2025-12-05T09:48:35:488493+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-05T09:47:51.448919+0000
                                           modified        2025-12-05T09:48:34.492000+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24184}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24184 members: 24184
                                           [mds.cephfs.compute-2.qyxerc{0:24184} state up:active seq 6 join_fscid=1 addr [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.hfgtsk{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/274001102,v1:192.168.122.100:6807/274001102] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.hxfsnw{-1:24215} state up:standby seq 1 addr [v2:192.168.122.101:6804/2817965964,v1:192.168.122.101:6805/2817965964] compat {c=[1],r=[1],i=[1fff]}]
Dec 05 09:48:35 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] up:active
Dec 05 09:48:35 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active} 2 up:standby
Dec 05 09:48:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:48:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:48:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:48:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:35 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.lttwli
Dec 05 09:48:35 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.lttwli
Dec 05 09:48:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.lttwli", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 05 09:48:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.lttwli", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 05 09:48:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.lttwli", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 05 09:48:35 compute-0 ceph-mgr[74711]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 05 09:48:35 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 05 09:48:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 05 09:48:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 05 09:48:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec 05 09:48:35 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432879579' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 05 09:48:35 compute-0 charming_mclean[96688]: mimic
Dec 05 09:48:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 05 09:48:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:48:35 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:35 compute-0 systemd[1]: libpod-f38eb2bd5fee4666f442ad806eaa28e4238b5c45a5a7cbae461d136a09172a11.scope: Deactivated successfully.
Dec 05 09:48:35 compute-0 conmon[96688]: conmon f38eb2bd5fee4666f442 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f38eb2bd5fee4666f442ad806eaa28e4238b5c45a5a7cbae461d136a09172a11.scope/container/memory.events
Dec 05 09:48:35 compute-0 podman[96673]: 2025-12-05 09:48:35.791046891 +0000 UTC m=+0.526199020 container died f38eb2bd5fee4666f442ad806eaa28e4238b5c45a5a7cbae461d136a09172a11 (image=quay.io/ceph/ceph:v19, name=charming_mclean, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 09:48:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-36c91b6f01608922bb9f395667acf828ebfc20d28d3a5cdca0e77510de4e85be-merged.mount: Deactivated successfully.
Dec 05 09:48:35 compute-0 podman[96673]: 2025-12-05 09:48:35.836610361 +0000 UTC m=+0.571762490 container remove f38eb2bd5fee4666f442ad806eaa28e4238b5c45a5a7cbae461d136a09172a11 (image=quay.io/ceph/ceph:v19, name=charming_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 09:48:35 compute-0 systemd[1]: libpod-conmon-f38eb2bd5fee4666f442ad806eaa28e4238b5c45a5a7cbae461d136a09172a11.scope: Deactivated successfully.
Dec 05 09:48:35 compute-0 sudo[96670]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 767 B/s wr, 1 op/s
Dec 05 09:48:36 compute-0 ceph-mon[74418]: mds.? [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] up:active
Dec 05 09:48:36 compute-0 ceph-mon[74418]: fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active} 2 up:standby
Dec 05 09:48:36 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:36 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:36 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:36 compute-0 ceph-mon[74418]: Creating key for client.nfs.cephfs.1.0.compute-2.lttwli
Dec 05 09:48:36 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.lttwli", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 05 09:48:36 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.lttwli", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 05 09:48:36 compute-0 ceph-mon[74418]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 05 09:48:36 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 05 09:48:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1432879579' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 05 09:48:36 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 05 09:48:36 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:36 compute-0 ceph-mon[74418]: pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 767 B/s wr, 1 op/s
Dec 05 09:48:36 compute-0 sudo[96762]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoclegsqulznttjdbpahnrcmpcaiqkpm ; /usr/bin/python3'
Dec 05 09:48:36 compute-0 sudo[96762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:36 compute-0 python3[96764]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:48:37 compute-0 podman[96765]: 2025-12-05 09:48:36.980334663 +0000 UTC m=+0.021218119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:48:37 compute-0 podman[96765]: 2025-12-05 09:48:37.450160548 +0000 UTC m=+0.491043994 container create 8a0fbc13749c0be7916dda54faa300a0214faa01cefc3cb619d022de02b34fdd (image=quay.io/ceph/ceph:v19, name=elated_meitner, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:48:37 compute-0 systemd[1]: Started libpod-conmon-8a0fbc13749c0be7916dda54faa300a0214faa01cefc3cb619d022de02b34fdd.scope.
Dec 05 09:48:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f21b1515d1ac9dfcd435a18c79f1f7e540f68340ad11d5ac6548f92b1fc6ff3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f21b1515d1ac9dfcd435a18c79f1f7e540f68340ad11d5ac6548f92b1fc6ff3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:37 compute-0 podman[96765]: 2025-12-05 09:48:37.54286588 +0000 UTC m=+0.583749346 container init 8a0fbc13749c0be7916dda54faa300a0214faa01cefc3cb619d022de02b34fdd (image=quay.io/ceph/ceph:v19, name=elated_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 09:48:37 compute-0 podman[96765]: 2025-12-05 09:48:37.548707089 +0000 UTC m=+0.589590535 container start 8a0fbc13749c0be7916dda54faa300a0214faa01cefc3cb619d022de02b34fdd (image=quay.io/ceph/ceph:v19, name=elated_meitner, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 09:48:37 compute-0 podman[96765]: 2025-12-05 09:48:37.552725047 +0000 UTC m=+0.593608493 container attach 8a0fbc13749c0be7916dda54faa300a0214faa01cefc3cb619d022de02b34fdd (image=quay.io/ceph/ceph:v19, name=elated_meitner, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:48:37 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 15 completed events
Dec 05 09:48:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:48:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec 05 09:48:37 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058138728' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 05 09:48:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e10 new map
Dec 05 09:48:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e10 print_map
                                           e10
                                           btime 2025-12-05T09:48:37.785966+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-05T09:47:51.448919+0000
                                           modified        2025-12-05T09:48:34.492000+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24184}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24184 members: 24184
                                           [mds.cephfs.compute-2.qyxerc{0:24184} state up:active seq 6 join_fscid=1 addr [v2:192.168.122.102:6804/3967900679,v1:192.168.122.102:6805/3967900679] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.hfgtsk{-1:14565} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.100:6806/274001102,v1:192.168.122.100:6807/274001102] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.hxfsnw{-1:24215} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2817965964,v1:192.168.122.101:6805/2817965964] compat {c=[1],r=[1],i=[1fff]}]
Dec 05 09:48:37 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Updating MDS map to version 10 from mon.0
Dec 05 09:48:37 compute-0 elated_meitner[96781]: 
Dec 05 09:48:37 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/274001102,v1:192.168.122.100:6807/274001102] up:standby
Dec 05 09:48:37 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2817965964,v1:192.168.122.101:6805/2817965964] up:standby
Dec 05 09:48:37 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active} 2 up:standby
Dec 05 09:48:37 compute-0 elated_meitner[96781]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Dec 05 09:48:37 compute-0 systemd[1]: libpod-8a0fbc13749c0be7916dda54faa300a0214faa01cefc3cb619d022de02b34fdd.scope: Deactivated successfully.
Dec 05 09:48:37 compute-0 podman[96765]: 2025-12-05 09:48:37.981161726 +0000 UTC m=+1.022045172 container died 8a0fbc13749c0be7916dda54faa300a0214faa01cefc3cb619d022de02b34fdd (image=quay.io/ceph/ceph:v19, name=elated_meitner, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:48:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f21b1515d1ac9dfcd435a18c79f1f7e540f68340ad11d5ac6548f92b1fc6ff3-merged.mount: Deactivated successfully.
Dec 05 09:48:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1058138728' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 05 09:48:38 compute-0 ceph-mon[74418]: mds.? [v2:192.168.122.100:6806/274001102,v1:192.168.122.100:6807/274001102] up:standby
Dec 05 09:48:38 compute-0 ceph-mon[74418]: mds.? [v2:192.168.122.101:6804/2817965964,v1:192.168.122.101:6805/2817965964] up:standby
Dec 05 09:48:38 compute-0 ceph-mon[74418]: fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active} 2 up:standby
Dec 05 09:48:38 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:38 compute-0 podman[96765]: 2025-12-05 09:48:38.036398119 +0000 UTC m=+1.077281565 container remove 8a0fbc13749c0be7916dda54faa300a0214faa01cefc3cb619d022de02b34fdd (image=quay.io/ceph/ceph:v19, name=elated_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:48:38 compute-0 systemd[1]: libpod-conmon-8a0fbc13749c0be7916dda54faa300a0214faa01cefc3cb619d022de02b34fdd.scope: Deactivated successfully.
Dec 05 09:48:38 compute-0 sudo[96762]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 767 B/s wr, 1 op/s
Dec 05 09:48:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 05 09:48:38 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 05 09:48:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 05 09:48:39 compute-0 ceph-mon[74418]: pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 767 B/s wr, 1 op/s
Dec 05 09:48:39 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 05 09:48:40 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 05 09:48:40 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 05 09:48:40 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.lttwli-rgw
Dec 05 09:48:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.lttwli-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 05 09:48:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.lttwli-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:48:40 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.lttwli-rgw
Dec 05 09:48:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.lttwli-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 09:48:40 compute-0 ceph-mgr[74711]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.lttwli's ganesha conf is defaulting to empty
Dec 05 09:48:40 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.lttwli's ganesha conf is defaulting to empty
Dec 05 09:48:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:48:40 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:40 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.lttwli on compute-2
Dec 05 09:48:40 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.lttwli on compute-2
Dec 05 09:48:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 767 B/s wr, 1 op/s
Dec 05 09:48:40 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 05 09:48:40 compute-0 ceph-mon[74418]: Rados config object exists: conf-nfs.cephfs
Dec 05 09:48:40 compute-0 ceph-mon[74418]: Creating key for client.nfs.cephfs.1.0.compute-2.lttwli-rgw
Dec 05 09:48:40 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.lttwli-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:48:40 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.lttwli-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 09:48:40 compute-0 ceph-mon[74418]: Bind address in nfs.cephfs.1.0.compute-2.lttwli's ganesha conf is defaulting to empty
Dec 05 09:48:40 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:40 compute-0 ceph-mon[74418]: Deploying daemon nfs.cephfs.1.0.compute-2.lttwli on compute-2
Dec 05 09:48:40 compute-0 ceph-mon[74418]: pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 767 B/s wr, 1 op/s
Dec 05 09:48:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:48:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 767 B/s wr, 2 op/s
Dec 05 09:48:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:48:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:48:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:48:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:42 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.hocvro
Dec 05 09:48:42 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.hocvro
Dec 05 09:48:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hocvro", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 05 09:48:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hocvro", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 05 09:48:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hocvro", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 05 09:48:42 compute-0 ceph-mgr[74711]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 05 09:48:42 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 05 09:48:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 05 09:48:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 05 09:48:42 compute-0 ceph-mon[74418]: pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 767 B/s wr, 2 op/s
Dec 05 09:48:42 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:42 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:42 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:42 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hocvro", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 05 09:48:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 05 09:48:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:48:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 767 B/s wr, 1 op/s
Dec 05 09:48:45 compute-0 ceph-mon[74418]: Creating key for client.nfs.cephfs.2.0.compute-0.hocvro
Dec 05 09:48:45 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hocvro", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 05 09:48:45 compute-0 ceph-mon[74418]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 05 09:48:45 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 05 09:48:45 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 05 09:48:45 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 05 09:48:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 05 09:48:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 05 09:48:45 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 05 09:48:45 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 05 09:48:45 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.hocvro-rgw
Dec 05 09:48:45 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.hocvro-rgw
Dec 05 09:48:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hocvro-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 05 09:48:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hocvro-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:48:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hocvro-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 09:48:45 compute-0 ceph-mgr[74711]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.hocvro's ganesha conf is defaulting to empty
Dec 05 09:48:45 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.hocvro's ganesha conf is defaulting to empty
Dec 05 09:48:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:48:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:45 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.hocvro on compute-0
Dec 05 09:48:45 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.hocvro on compute-0
Dec 05 09:48:46 compute-0 sudo[96875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:48:46 compute-0 sudo[96875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:46 compute-0 sudo[96875]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:46 compute-0 ceph-mon[74418]: pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 767 B/s wr, 1 op/s
Dec 05 09:48:46 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 05 09:48:46 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 05 09:48:46 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hocvro-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:48:46 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hocvro-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 09:48:46 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:48:46 compute-0 sudo[96900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:48:46
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'images', '.mgr', 'backups']
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 09:48:46 compute-0 sudo[96900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Dec 05 09:48:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec 05 09:48:46 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:48:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:48:46 compute-0 podman[96965]: 2025-12-05 09:48:46.509719624 +0000 UTC m=+0.042212780 container create 07f38e4196e80dfd455d5c3b9c78ce225b958e619881e9899bc16a8d2a5cf778 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 05 09:48:46 compute-0 systemd[1]: Started libpod-conmon-07f38e4196e80dfd455d5c3b9c78ce225b958e619881e9899bc16a8d2a5cf778.scope.
Dec 05 09:48:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:48:46 compute-0 podman[96965]: 2025-12-05 09:48:46.581410955 +0000 UTC m=+0.113904131 container init 07f38e4196e80dfd455d5c3b9c78ce225b958e619881e9899bc16a8d2a5cf778 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bohr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:48:46 compute-0 podman[96965]: 2025-12-05 09:48:46.587008587 +0000 UTC m=+0.119501743 container start 07f38e4196e80dfd455d5c3b9c78ce225b958e619881e9899bc16a8d2a5cf778 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bohr, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:48:46 compute-0 podman[96965]: 2025-12-05 09:48:46.493618486 +0000 UTC m=+0.026111662 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:48:46 compute-0 awesome_bohr[96981]: 167 167
Dec 05 09:48:46 compute-0 podman[96965]: 2025-12-05 09:48:46.59044945 +0000 UTC m=+0.122942616 container attach 07f38e4196e80dfd455d5c3b9c78ce225b958e619881e9899bc16a8d2a5cf778 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bohr, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:48:46 compute-0 systemd[1]: libpod-07f38e4196e80dfd455d5c3b9c78ce225b958e619881e9899bc16a8d2a5cf778.scope: Deactivated successfully.
Dec 05 09:48:46 compute-0 podman[96965]: 2025-12-05 09:48:46.591379186 +0000 UTC m=+0.123872372 container died 07f38e4196e80dfd455d5c3b9c78ce225b958e619881e9899bc16a8d2a5cf778 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bohr, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 09:48:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-12e5c172ea53b59db23d0a7aba77a7b8aae3d6797e4dffbbb4fa3f94b6138ef4-merged.mount: Deactivated successfully.
Dec 05 09:48:46 compute-0 podman[96965]: 2025-12-05 09:48:46.631840407 +0000 UTC m=+0.164333563 container remove 07f38e4196e80dfd455d5c3b9c78ce225b958e619881e9899bc16a8d2a5cf778 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec 05 09:48:46 compute-0 systemd[1]: libpod-conmon-07f38e4196e80dfd455d5c3b9c78ce225b958e619881e9899bc16a8d2a5cf778.scope: Deactivated successfully.
Dec 05 09:48:46 compute-0 systemd[1]: Reloading.
Dec 05 09:48:46 compute-0 systemd-rc-local-generator[97024]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:48:46 compute-0 systemd-sysv-generator[97027]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:48:46 compute-0 systemd[1]: Reloading.
Dec 05 09:48:47 compute-0 systemd-sysv-generator[97068]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:48:47 compute-0 systemd-rc-local-generator[97063]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:48:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 05 09:48:47 compute-0 ceph-mon[74418]: Rados config object exists: conf-nfs.cephfs
Dec 05 09:48:47 compute-0 ceph-mon[74418]: Creating key for client.nfs.cephfs.2.0.compute-0.hocvro-rgw
Dec 05 09:48:47 compute-0 ceph-mon[74418]: Bind address in nfs.cephfs.2.0.compute-0.hocvro's ganesha conf is defaulting to empty
Dec 05 09:48:47 compute-0 ceph-mon[74418]: Deploying daemon nfs.cephfs.2.0.compute-0.hocvro on compute-0
Dec 05 09:48:47 compute-0 ceph-mon[74418]: pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Dec 05 09:48:47 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:48:47 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:48:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 05 09:48:47 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 05 09:48:47 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 118929c0-955d-4df8-a180-01eb9b819ecf (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 05 09:48:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec 05 09:48:47 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:48:47 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:48:47 compute-0 podman[97120]: 2025-12-05 09:48:47.411117252 +0000 UTC m=+0.034995303 container create d1ea233284d0d310cc076ca9ad62473a1bc421943ae196b1f9584786262f3156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bfa4c763b8583ae6b894ae1e62989a631d6d04fe9261ab88f6f47e59639de7/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bfa4c763b8583ae6b894ae1e62989a631d6d04fe9261ab88f6f47e59639de7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bfa4c763b8583ae6b894ae1e62989a631d6d04fe9261ab88f6f47e59639de7/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bfa4c763b8583ae6b894ae1e62989a631d6d04fe9261ab88f6f47e59639de7/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hocvro-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:47 compute-0 podman[97120]: 2025-12-05 09:48:47.463006725 +0000 UTC m=+0.086884806 container init d1ea233284d0d310cc076ca9ad62473a1bc421943ae196b1f9584786262f3156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 09:48:47 compute-0 podman[97120]: 2025-12-05 09:48:47.467658371 +0000 UTC m=+0.091536422 container start d1ea233284d0d310cc076ca9ad62473a1bc421943ae196b1f9584786262f3156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:48:47 compute-0 bash[97120]: d1ea233284d0d310cc076ca9ad62473a1bc421943ae196b1f9584786262f3156
Dec 05 09:48:47 compute-0 podman[97120]: 2025-12-05 09:48:47.396200526 +0000 UTC m=+0.020078597 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:48:47 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:48:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:47 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 05 09:48:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:47 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 05 09:48:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:48:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:47 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 05 09:48:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:47 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 05 09:48:47 compute-0 sudo[96900]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:47 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 05 09:48:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:47 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 05 09:48:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:48:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:47 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 05 09:48:47 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:48:47 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:48:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:47 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:48:47 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:47 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 9c789668-cfb5-497f-91f1-5d6807315926 (Updating nfs.cephfs deployment (+3 -> 3))
Dec 05 09:48:47 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 9c789668-cfb5-497f-91f1-5d6807315926 (Updating nfs.cephfs deployment (+3 -> 3)) in 14 seconds
Dec 05 09:48:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:48:47 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:47 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev c7aa2a0f-8751-44fe-b93f-5f117475e4b1 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec 05 09:48:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Dec 05 09:48:47 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:47 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.sldaaq on compute-1
Dec 05 09:48:47 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.sldaaq on compute-1
Dec 05 09:48:47 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 16 completed events
Dec 05 09:48:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:48:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 05 09:48:48 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:48:48 compute-0 ceph-mon[74418]: osdmap e53: 3 total, 3 up, 3 in
Dec 05 09:48:48 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:48:48 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:48 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:48 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:48 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:48 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:48 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:48:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 05 09:48:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 09:48:48 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 05 09:48:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec 05 09:48:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:48:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec 05 09:48:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:48:48 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev c6144b4c-4516-4a55-bb2f-9d1f28fd265e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 05 09:48:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec 05 09:48:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 05 09:48:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:48:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 09:48:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 05 09:48:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:48:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:48:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:48:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 05 09:48:49 compute-0 ceph-mon[74418]: Deploying daemon haproxy.nfs.cephfs.compute-1.sldaaq on compute-1
Dec 05 09:48:49 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:48:49 compute-0 ceph-mon[74418]: pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 09:48:49 compute-0 ceph-mon[74418]: osdmap e54: 3 total, 3 up, 3 in
Dec 05 09:48:49 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:48:49 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:48:49 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:48:49 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 05 09:48:49 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 5460c29d-b254-450c-9920-9d0fe2330c2c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 05 09:48:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec 05 09:48:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:48:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v40: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Dec 05 09:48:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec 05 09:48:50 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:48:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 05 09:48:50 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:48:50 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:48:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 05 09:48:50 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 05 09:48:50 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 3affa59e-9612-49cc-a3e3-848f04c8d9a3 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 05 09:48:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Dec 05 09:48:50 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:48:50 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:48:50 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:48:50 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:48:50 compute-0 ceph-mon[74418]: osdmap e55: 3 total, 3 up, 3 in
Dec 05 09:48:50 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:48:50 compute-0 ceph-mon[74418]: pgmap v40: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Dec 05 09:48:50 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 55 pg[8.0( v 39'6 (0'0,39'6] local-lis/les=38/39 n=6 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=55 pruub=15.487421036s) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 39'5 mlcod 39'5 active pruub 208.569259644s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 55 pg[9.0( v 52'1029 (0'0,52'1029] local-lis/les=40/41 n=178 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=55 pruub=9.473465919s) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 52'1028 mlcod 52'1028 active pruub 202.555511475s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.0( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=55 pruub=15.487421036s) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 39'5 mlcod 0'0 unknown pruub 208.569259644s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x563f24c64900) operator()   moving buffer(0x563f24953c48 space 0x563f2486a830 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x563f24c64900) operator()   moving buffer(0x563f24929608 space 0x563f247a3ae0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x563f24c64900) operator()   moving buffer(0x563f24913928 space 0x563f2486b120 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x563f24c64900) operator()   moving buffer(0x563f24912ac8 space 0x563f2480a690 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.5( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.9( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.d( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.14( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.12( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.13( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.2( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.4( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.6( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.1( v 39'6 (0'0,39'6] local-lis/les=38/39 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.16( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.c( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.8( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.1a( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.1c( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.3( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.18( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.f( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.15( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.19( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.a( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.7( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.b( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.10( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.17( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.1d( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.1b( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.1f( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.11( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.e( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[8.1e( v 39'6 lc 0'0 (0'0,39'6] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.0( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=55 pruub=9.473465919s) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 52'1028 mlcod 0'0 unknown pruub 202.555511475s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f249391a8 space 0x563f2486af80 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24921388 space 0x563f2486aeb0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24920708 space 0x563f2486ad10 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f248ff108 space 0x563f247a3940 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24906988 space 0x563f2486b870 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24938708 space 0x563f2486a010 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24685a68 space 0x563f247a36d0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f2486fce8 space 0x563f24739ef0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f2441a708 space 0x563f24954eb0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24938de8 space 0x563f24800de0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f249203e8 space 0x563f24739390 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f249060c8 space 0x563f2486b390 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24928d48 space 0x563f2486b050 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f248feb68 space 0x563f2486b7a0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f23c2a0c8 space 0x563f2486a5c0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f249299c8 space 0x563f2486b600 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24921248 space 0x563f247a3a10 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f2486f4c8 space 0x563f2480a350 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f248fefc8 space 0x563f2486b6d0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24938168 space 0x563f24772280 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24921928 space 0x563f2486a280 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f249382a8 space 0x563f2486ac40 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24921108 space 0x563f2486b530 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24907748 space 0x563f2486a4f0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24685248 space 0x563f247221b0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f248e1748 space 0x563f2442ceb0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f2486f6a8 space 0x563f24738de0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24920f28 space 0x563f247a3600 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f2486e0c8 space 0x563f2486a9d0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f24906e88 space 0x563f2486ade0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563f24015680) operator()   moving buffer(0x563f2486e208 space 0x563f2486a0e0 0x0~1000 clean)
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.14( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.3( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.f( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.4( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.9( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.d( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.2( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.b( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.13( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.6( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.11( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.15( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.1( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.a( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.7( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.e( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.8( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.c( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.5( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.10( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.12( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.16( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.17( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.18( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.19( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.1b( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.1c( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.1d( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.1e( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.1f( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:50 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 56 pg[9.1a( v 52'1029 lc 0'0 (0'0,52'1029] local-lis/les=40/41 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 05 09:48:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:48:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 05 09:48:51 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 05 09:48:51 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev aeff483c-179d-4171-9015-6b72a2f779e7 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec 05 09:48:51 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 118929c0-955d-4df8-a180-01eb9b819ecf (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 05 09:48:51 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 118929c0-955d-4df8-a180-01eb9b819ecf (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Dec 05 09:48:51 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev c6144b4c-4516-4a55-bb2f-9d1f28fd265e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 05 09:48:51 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event c6144b4c-4516-4a55-bb2f-9d1f28fd265e (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Dec 05 09:48:51 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 5460c29d-b254-450c-9920-9d0fe2330c2c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 05 09:48:51 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 5460c29d-b254-450c-9920-9d0fe2330c2c (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Dec 05 09:48:51 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 3affa59e-9612-49cc-a3e3-848f04c8d9a3 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 05 09:48:51 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 3affa59e-9612-49cc-a3e3-848f04c8d9a3 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Dec 05 09:48:51 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev aeff483c-179d-4171-9015-6b72a2f779e7 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec 05 09:48:51 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event aeff483c-179d-4171-9015-6b72a2f779e7 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.14( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:48:51 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:48:51 compute-0 ceph-mon[74418]: osdmap e56: 3 total, 3 up, 3 in
Dec 05 09:48:51 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 09:48:51 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 05 09:48:51 compute-0 ceph-mon[74418]: osdmap e57: 3 total, 3 up, 3 in
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.17( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.15( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.16( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.16( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.17( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.10( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.15( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.11( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.10( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.3( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.11( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.3( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.2( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.e( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.f( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.9( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.b( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.8( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.9( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.8( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.a( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.f( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.d( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.e( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.c( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.d( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.c( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.1( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.b( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.1( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.0( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 52'1028 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.6( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.0( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 39'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.2( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.14( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.7( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.7( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.6( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.a( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.4( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.5( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.5( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.1a( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.1b( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.1b( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.4( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.1a( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.19( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.1e( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.1f( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.1f( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.1c( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.19( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.1e( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.18( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.18( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.1c( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.1d( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.13( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.12( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[8.12( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [1] r=0 lpr=55 pi=[38,55)/1 crt=39'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.1d( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 57 pg[9.13( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [1] r=0 lpr=55 pi=[40,55)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:51 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 05 09:48:52 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 05 09:48:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v43: 291 pgs: 1 peering, 31 unknown, 259 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 3.0 KiB/s wr, 13 op/s
Dec 05 09:48:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Dec 05 09:48:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:48:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 05 09:48:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:48:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:48:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 05 09:48:52 compute-0 ceph-mon[74418]: pgmap v43: 291 pgs: 1 peering, 31 unknown, 259 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 3.0 KiB/s wr, 13 op/s
Dec 05 09:48:52 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:48:52 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 09:48:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:48:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:48:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 05 09:48:52 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 05 09:48:52 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 58 pg[11.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.289965630s) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active pruub 207.470733643s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:48:52 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 58 pg[11.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.289965630s) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown pruub 207.470733643s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:53 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 21 completed events
Dec 05 09:48:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:48:53 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 05 09:48:53 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 05 09:48:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 05 09:48:54 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 05 09:48:54 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 05 09:48:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v45: 353 pgs: 1 peering, 93 unknown, 259 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 2.6 KiB/s wr, 11 op/s
Dec 05 09:48:55 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec 05 09:48:55 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec 05 09:48:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:55 compute-0 ceph-mgr[74711]: [progress WARNING root] Starting Global Recovery Event,94 pgs not in active + clean state
Dec 05 09:48:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 05 09:48:55 compute-0 ceph-mon[74418]: 8.14 scrub starts
Dec 05 09:48:55 compute-0 ceph-mon[74418]: 8.14 scrub ok
Dec 05 09:48:55 compute-0 ceph-mon[74418]: 10.1b scrub starts
Dec 05 09:48:55 compute-0 ceph-mon[74418]: 10.1b scrub ok
Dec 05 09:48:55 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:48:55 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 09:48:55 compute-0 ceph-mon[74418]: osdmap e58: 3 total, 3 up, 3 in
Dec 05 09:48:55 compute-0 ceph-mon[74418]: 9.17 scrub starts
Dec 05 09:48:55 compute-0 ceph-mon[74418]: 9.17 scrub ok
Dec 05 09:48:55 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 05 09:48:56 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.16( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.13( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.15( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.c( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.b( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.a( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.9( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.d( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.2( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.6( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.18( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1f( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.10( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.11( empty local-lis/les=44/45 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:48:56 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.16( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.13( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=58/59 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.c( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.b( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.9( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.a( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.15( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.d( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.2( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.6( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.18( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1f( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.10( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 59 pg[11.11( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:48:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 32 peering, 31 unknown, 290 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 2.1 KiB/s wr, 9 op/s
Dec 05 09:48:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 05 09:48:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 05 09:48:56 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 05 09:48:56 compute-0 ceph-mon[74418]: 10.12 scrub starts
Dec 05 09:48:56 compute-0 ceph-mon[74418]: 8.15 scrub starts
Dec 05 09:48:56 compute-0 ceph-mon[74418]: 8.15 scrub ok
Dec 05 09:48:56 compute-0 ceph-mon[74418]: pgmap v45: 353 pgs: 1 peering, 93 unknown, 259 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 2.6 KiB/s wr, 11 op/s
Dec 05 09:48:56 compute-0 ceph-mon[74418]: 8.16 scrub starts
Dec 05 09:48:56 compute-0 ceph-mon[74418]: 8.16 scrub ok
Dec 05 09:48:56 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:48:56 compute-0 ceph-mon[74418]: 10.12 scrub ok
Dec 05 09:48:56 compute-0 ceph-mon[74418]: 10.11 scrub starts
Dec 05 09:48:56 compute-0 ceph-mon[74418]: 10.11 scrub ok
Dec 05 09:48:56 compute-0 ceph-mon[74418]: osdmap e59: 3 total, 3 up, 3 in
Dec 05 09:48:56 compute-0 ceph-mon[74418]: 9.16 scrub starts
Dec 05 09:48:56 compute-0 ceph-mon[74418]: 9.16 scrub ok
Dec 05 09:48:56 compute-0 ceph-mon[74418]: pgmap v47: 353 pgs: 32 peering, 31 unknown, 290 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 2.1 KiB/s wr, 9 op/s
Dec 05 09:48:56 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 05 09:48:56 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 05 09:48:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:48:57 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Dec 05 09:48:57 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Dec 05 09:48:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 32 peering, 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:48:58 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 05 09:48:59 compute-0 ceph-mon[74418]: 10.10 scrub starts
Dec 05 09:48:59 compute-0 ceph-mon[74418]: 10.10 scrub ok
Dec 05 09:48:59 compute-0 ceph-mon[74418]: osdmap e60: 3 total, 3 up, 3 in
Dec 05 09:48:59 compute-0 ceph-mon[74418]: 8.17 scrub starts
Dec 05 09:48:59 compute-0 ceph-mon[74418]: 8.17 scrub ok
Dec 05 09:48:59 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 05 09:48:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:48:59 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec 05 09:48:59 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec 05 09:49:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 32 peering, 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:49:00 compute-0 ceph-mon[74418]: 10.7 scrub starts
Dec 05 09:49:00 compute-0 ceph-mon[74418]: 10.7 scrub ok
Dec 05 09:49:00 compute-0 ceph-mon[74418]: 8.10 deep-scrub starts
Dec 05 09:49:00 compute-0 ceph-mon[74418]: 8.10 deep-scrub ok
Dec 05 09:49:00 compute-0 ceph-mon[74418]: pgmap v49: 353 pgs: 32 peering, 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:00 compute-0 ceph-mon[74418]: 10.1c scrub starts
Dec 05 09:49:00 compute-0 ceph-mon[74418]: 9.10 scrub starts
Dec 05 09:49:00 compute-0 ceph-mon[74418]: 10.1c scrub ok
Dec 05 09:49:00 compute-0 ceph-mon[74418]: 9.10 scrub ok
Dec 05 09:49:00 compute-0 ceph-mon[74418]: 10.1f scrub starts
Dec 05 09:49:00 compute-0 ceph-mon[74418]: 10.1f scrub ok
Dec 05 09:49:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 05 09:49:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:00 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.ijjpxl on compute-0
Dec 05 09:49:00 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.ijjpxl on compute-0
Dec 05 09:49:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:00 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:00 compute-0 sudo[97190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:49:00 compute-0 sudo[97190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:49:00 compute-0 sudo[97190]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:00 compute-0 sudo[97219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:49:00 compute-0 sudo[97219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:49:00 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 05 09:49:00 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 05 09:49:01 compute-0 ceph-mon[74418]: 8.3 scrub starts
Dec 05 09:49:01 compute-0 ceph-mon[74418]: 8.3 scrub ok
Dec 05 09:49:01 compute-0 ceph-mon[74418]: pgmap v50: 353 pgs: 32 peering, 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:01 compute-0 ceph-mon[74418]: 10.1d deep-scrub starts
Dec 05 09:49:01 compute-0 ceph-mon[74418]: 10.1d deep-scrub ok
Dec 05 09:49:01 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:01 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:01 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:01 compute-0 ceph-mon[74418]: Deploying daemon haproxy.nfs.cephfs.compute-0.ijjpxl on compute-0
Dec 05 09:49:01 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 05 09:49:01 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 05 09:49:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 05 09:49:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:49:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 05 09:49:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:49:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 05 09:49:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:49:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 05 09:49:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 05 09:49:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 05 09:49:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:49:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 05 09:49:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:49:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:49:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:49:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 05 09:49:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:49:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 05 09:49:02 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 05 09:49:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:49:02 compute-0 ceph-mon[74418]: 8.11 scrub starts
Dec 05 09:49:02 compute-0 ceph-mon[74418]: 8.11 scrub ok
Dec 05 09:49:02 compute-0 ceph-mon[74418]: 10.1a scrub starts
Dec 05 09:49:02 compute-0 ceph-mon[74418]: 10.1a scrub ok
Dec 05 09:49:02 compute-0 ceph-mon[74418]: pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:02 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:49:02 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:49:02 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:49:02 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 05 09:49:02 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:49:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:02 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf880016e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[12.19( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[12.1c( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[10.18( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[10.5( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[12.8( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[12.a( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[12.e( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[12.c( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[12.b( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[12.6( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[12.12( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[12.10( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.315078735s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.634170532s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.16( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.314550400s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.634140015s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.16( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.781024933s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100631714s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.16( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.780998230s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100631714s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.314350128s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.634155273s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.314336777s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.634155273s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.17( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.780724525s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100646973s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.17( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.780711174s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100646973s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.13( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.314031601s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.634170532s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.13( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.314016342s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.634170532s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.10( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.780322075s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100677490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.10( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.780305862s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100677490s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.314141273s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.634643555s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.314126968s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.634643555s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.11( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.780091286s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100738525s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.11( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.780042648s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100738525s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.313785553s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.634643555s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.313769341s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.634643555s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.2( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.780269623s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.101287842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.2( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.780254364s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101287842s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.3( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.779598236s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100784302s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.3( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.779581070s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100784302s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.f( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.779412270s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100814819s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.f( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.779397964s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100814819s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.8( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.779248238s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100830078s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.8( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.779233932s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100830078s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.a( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.313074112s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.634826660s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.a( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.313057899s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.634826660s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.9( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.778883934s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100860596s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.9( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.778867722s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100860596s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.a( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.778709412s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100860596s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.a( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.778697968s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100860596s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.312713623s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635040283s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.312703133s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635040283s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.d( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.778450966s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100875854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.d( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.778441429s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100875854s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.312617302s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635147095s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.312608719s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635147095s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.c( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.778239250s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100875854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.c( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.778221130s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100875854s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.312189102s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635040283s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.312170982s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635040283s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.b( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.777904510s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100906372s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.b( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.777893066s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100906372s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.312028885s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635223389s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.312018394s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635223389s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.312079430s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635452271s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.312067032s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635452271s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.5( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.311678886s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635238647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.5( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.311662674s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635238647s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.6( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.777635574s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.101318359s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.6( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.777626038s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101318359s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.5( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.777537346s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.101333618s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.5( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.777526855s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101333618s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.7( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.311341286s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635314941s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.7( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.311325073s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635314941s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.4( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.777272224s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.101348877s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.4( v 39'6 (0'0,39'6] local-lis/les=55/57 n=1 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.777260780s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101348877s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.1b( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.777085304s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.101348877s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.1b( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.777071953s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101348877s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.310977936s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635360718s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.310966492s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635360718s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.310783386s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635330200s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.310771942s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635330200s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.19( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.776648521s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.101364136s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.19( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.776636124s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101364136s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.310626984s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635482788s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.310615540s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635482788s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.18( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.776419640s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.101394653s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.18( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.776408195s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101394653s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.310458183s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635528564s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.310447693s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635528564s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.1f( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.776212692s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.101394653s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.1f( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.776202202s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101394653s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.1d( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.310074806s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635391235s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.1d( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.310057640s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635391235s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.309971809s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 214.635437012s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.309961319s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.635437012s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.1c( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.775743484s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.101394653s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.1c( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.775730133s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101394653s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.12( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.775535583s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.101425171s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.12( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.775523186s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101425171s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.16( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.314535141s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.634140015s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=58/59 n=0 ec=58/44 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=9.314726830s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.634170532s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.14( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.765980721s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.094070435s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.14( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.765948296s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.094070435s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.15( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.772449493s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 active pruub 218.100601196s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:02 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 61 pg[8.15( v 39'6 (0'0,39'6] local-lis/les=55/57 n=0 ec=55/38 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.772393227s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=39'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100601196s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:02 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 05 09:49:02 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 05 09:49:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 05 09:49:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 05 09:49:03 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[12.19( v 54'44 (0'0,54'44] local-lis/les=61/62 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=54'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[12.1c( v 54'44 (0'0,54'44] local-lis/les=61/62 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=54'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-mon[74418]: 9.2 scrub starts
Dec 05 09:49:03 compute-0 ceph-mon[74418]: 9.2 scrub ok
Dec 05 09:49:03 compute-0 ceph-mon[74418]: 10.6 scrub starts
Dec 05 09:49:03 compute-0 ceph-mon[74418]: 10.6 scrub ok
Dec 05 09:49:03 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:49:03 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:49:03 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:49:03 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 05 09:49:03 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:49:03 compute-0 ceph-mon[74418]: osdmap e61: 3 total, 3 up, 3 in
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[10.5( v 52'48 (0'0,52'48] local-lis/les=61/62 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[10.2( v 52'48 (0'0,52'48] local-lis/les=61/62 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[12.8( v 54'44 (0'0,54'44] local-lis/les=61/62 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=54'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[12.a( v 54'44 (0'0,54'44] local-lis/les=61/62 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=54'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[10.19( v 52'48 (0'0,52'48] local-lis/les=61/62 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[10.8( v 52'48 (0'0,52'48] local-lis/les=61/62 n=1 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[12.e( v 54'44 (0'0,54'44] local-lis/les=61/62 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=54'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[12.c( v 54'44 (0'0,54'44] local-lis/les=61/62 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=54'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[12.b( v 54'44 (0'0,54'44] local-lis/les=61/62 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=54'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[12.6( v 54'44 (0'0,54'44] local-lis/les=61/62 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=54'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[10.13( v 52'48 (0'0,52'48] local-lis/les=61/62 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[12.12( v 54'44 (0'0,54'44] local-lis/les=61/62 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=54'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[10.15( v 58'51 lc 52'37 (0'0,58'51] local-lis/les=61/62 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=58'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[10.1b( v 52'48 (0'0,52'48] local-lis/les=61/62 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[12.10( v 60'47 lc 52'14 (0'0,60'47] local-lis/les=61/62 n=0 ec=58/50 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=60'47 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[10.14( v 58'51 lc 52'45 (0'0,58'51] local-lis/les=61/62 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=58'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:03 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 62 pg[10.18( v 52'48 (0'0,52'48] local-lis/les=61/62 n=0 ec=56/42 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v54: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 05 09:49:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 05 09:49:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:04 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 05 09:49:04 compute-0 ceph-mon[74418]: 9.15 scrub starts
Dec 05 09:49:04 compute-0 ceph-mon[74418]: 9.15 scrub ok
Dec 05 09:49:04 compute-0 ceph-mon[74418]: 10.17 scrub starts
Dec 05 09:49:04 compute-0 ceph-mon[74418]: 10.17 scrub ok
Dec 05 09:49:04 compute-0 ceph-mon[74418]: osdmap e62: 3 total, 3 up, 3 in
Dec 05 09:49:04 compute-0 ceph-mon[74418]: pgmap v54: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:04 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 05 09:49:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 05 09:49:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 05 09:49:04 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 05 09:49:04 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 05 09:49:04 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 05 09:49:05 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event ecd8d3dd-5dd3-4ad3-b04d-adbb5d326dbc (Global Recovery Event) in 10 seconds
Dec 05 09:49:05 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 05 09:49:05 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 05 09:49:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 198 B/s, 0 keys/s, 1 objects/s recovering
Dec 05 09:49:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:06 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 05 09:49:06 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 05 09:49:06 compute-0 ceph-mon[74418]: 10.16 scrub starts
Dec 05 09:49:06 compute-0 ceph-mon[74418]: 8.c scrub starts
Dec 05 09:49:06 compute-0 ceph-mon[74418]: 8.c scrub ok
Dec 05 09:49:06 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 05 09:49:06 compute-0 ceph-mon[74418]: osdmap e63: 3 total, 3 up, 3 in
Dec 05 09:49:06 compute-0 ceph-mon[74418]: 10.16 scrub ok
Dec 05 09:49:06 compute-0 ceph-mon[74418]: 10.14 scrub starts
Dec 05 09:49:06 compute-0 ceph-mon[74418]: 10.14 scrub ok
Dec 05 09:49:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 05 09:49:06 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 05 09:49:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 05 09:49:06 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 05 09:49:06 compute-0 podman[97285]: 2025-12-05 09:49:06.832345276 +0000 UTC m=+5.813712387 container create 7eeb7faaf9ad3dae2eec078b03fb1bb3b5c5c3a4a5cfbb551192eda8d0cd76fd (image=quay.io/ceph/haproxy:2.3, name=interesting_burnell)
Dec 05 09:49:06 compute-0 podman[97285]: 2025-12-05 09:49:06.801028264 +0000 UTC m=+5.782395425 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 05 09:49:06 compute-0 systemd[1]: Started libpod-conmon-7eeb7faaf9ad3dae2eec078b03fb1bb3b5c5c3a4a5cfbb551192eda8d0cd76fd.scope.
Dec 05 09:49:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:49:06 compute-0 podman[97285]: 2025-12-05 09:49:06.913833116 +0000 UTC m=+5.895200317 container init 7eeb7faaf9ad3dae2eec078b03fb1bb3b5c5c3a4a5cfbb551192eda8d0cd76fd (image=quay.io/ceph/haproxy:2.3, name=interesting_burnell)
Dec 05 09:49:06 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 05 09:49:06 compute-0 podman[97285]: 2025-12-05 09:49:06.922365657 +0000 UTC m=+5.903732768 container start 7eeb7faaf9ad3dae2eec078b03fb1bb3b5c5c3a4a5cfbb551192eda8d0cd76fd (image=quay.io/ceph/haproxy:2.3, name=interesting_burnell)
Dec 05 09:49:06 compute-0 podman[97285]: 2025-12-05 09:49:06.927696446 +0000 UTC m=+5.909063577 container attach 7eeb7faaf9ad3dae2eec078b03fb1bb3b5c5c3a4a5cfbb551192eda8d0cd76fd (image=quay.io/ceph/haproxy:2.3, name=interesting_burnell)
Dec 05 09:49:06 compute-0 interesting_burnell[97398]: 0 0
Dec 05 09:49:06 compute-0 systemd[1]: libpod-7eeb7faaf9ad3dae2eec078b03fb1bb3b5c5c3a4a5cfbb551192eda8d0cd76fd.scope: Deactivated successfully.
Dec 05 09:49:06 compute-0 conmon[97398]: conmon 7eeb7faaf9ad3dae2eec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7eeb7faaf9ad3dae2eec078b03fb1bb3b5c5c3a4a5cfbb551192eda8d0cd76fd.scope/container/memory.events
Dec 05 09:49:06 compute-0 podman[97285]: 2025-12-05 09:49:06.934353868 +0000 UTC m=+5.915720999 container died 7eeb7faaf9ad3dae2eec078b03fb1bb3b5c5c3a4a5cfbb551192eda8d0cd76fd (image=quay.io/ceph/haproxy:2.3, name=interesting_burnell)
Dec 05 09:49:06 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 05 09:49:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3f6bb3d9bb478ce6a2d3b0fb5c8b7e0f310815b91f440c0a532ce6df0273f90-merged.mount: Deactivated successfully.
Dec 05 09:49:07 compute-0 podman[97285]: 2025-12-05 09:49:07.002555545 +0000 UTC m=+5.983922696 container remove 7eeb7faaf9ad3dae2eec078b03fb1bb3b5c5c3a4a5cfbb551192eda8d0cd76fd (image=quay.io/ceph/haproxy:2.3, name=interesting_burnell)
Dec 05 09:49:07 compute-0 systemd[1]: libpod-conmon-7eeb7faaf9ad3dae2eec078b03fb1bb3b5c5c3a4a5cfbb551192eda8d0cd76fd.scope: Deactivated successfully.
Dec 05 09:49:07 compute-0 systemd[1]: Reloading.
Dec 05 09:49:07 compute-0 systemd-rc-local-generator[97441]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:49:07 compute-0 systemd-sysv-generator[97447]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.17( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=3 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.266788483s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 218.100540161s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.17( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=3 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.266732216s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100540161s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.3( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.266805649s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 218.100799561s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.3( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.266735077s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100799561s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.b( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.266551971s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 218.100845337s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.b( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.266538620s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100845337s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.f( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.266507149s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 218.100875854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.f( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.266487122s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.100875854s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.7( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.265657425s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 218.101303101s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.7( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.265607834s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101303101s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.1b( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.265126228s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 218.101348877s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.1b( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.265105247s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101348877s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.1f( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.264854431s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 218.101379395s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.1f( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.264842033s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101379395s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.13( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.264181137s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 218.101486206s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 64 pg[9.13( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=64 pruub=8.264132500s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.101486206s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:07 compute-0 systemd[1]: Reloading.
Dec 05 09:49:07 compute-0 systemd-sysv-generator[97487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:49:07 compute-0 systemd-rc-local-generator[97483]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:49:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:49:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 05 09:49:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 05 09:49:07 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.f( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.13( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.f( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.13( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.1b( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.1b( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.b( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.b( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.7( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.17( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=3 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.17( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=3 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.3( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.3( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.7( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.1f( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:07 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 65 pg[9.1f( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:07 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.ijjpxl for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 12.15 scrub starts
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 12.15 scrub ok
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 8.1f scrub starts
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 8.1f scrub ok
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 9.14 scrub starts
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 9.14 scrub ok
Dec 05 09:49:07 compute-0 ceph-mon[74418]: pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 198 B/s, 0 keys/s, 1 objects/s recovering
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 10.0 scrub starts
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 11.17 scrub starts
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 11.17 scrub ok
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 10.0 scrub ok
Dec 05 09:49:07 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 05 09:49:07 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 05 09:49:07 compute-0 ceph-mon[74418]: osdmap e64: 3 total, 3 up, 3 in
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 9.11 scrub starts
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 9.11 scrub ok
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 8.1c scrub starts
Dec 05 09:49:07 compute-0 ceph-mon[74418]: 8.1c scrub ok
Dec 05 09:49:07 compute-0 ceph-mon[74418]: osdmap e65: 3 total, 3 up, 3 in
Dec 05 09:49:07 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 05 09:49:07 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 05 09:49:07 compute-0 podman[97542]: 2025-12-05 09:49:07.851085947 +0000 UTC m=+0.022251108 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 05 09:49:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 257 B/s, 0 keys/s, 2 objects/s recovering
Dec 05 09:49:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 05 09:49:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 05 09:49:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 05 09:49:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:08 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c000fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:08 compute-0 podman[97542]: 2025-12-05 09:49:08.571656753 +0000 UTC m=+0.742821894 container create d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 09:49:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 05 09:49:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 05 09:49:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 05 09:49:08 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 66 pg[9.7( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:08 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 66 pg[9.13( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:08 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 66 pg[9.1f( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:08 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 66 pg[9.1b( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:08 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 66 pg[9.17( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=3 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:08 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 66 pg[9.b( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:08 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 66 pg[9.3( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:08 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 66 pg[9.f( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cc8b41eb7f9961ad3dc2d4822b74cf6fa6c736911113985d3f1cd4585ec75e/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:08 compute-0 podman[97542]: 2025-12-05 09:49:08.685011989 +0000 UTC m=+0.856177150 container init d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 09:49:08 compute-0 podman[97542]: 2025-12-05 09:49:08.692915224 +0000 UTC m=+0.864080365 container start d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 09:49:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [NOTICE] 338/094908 (2) : New worker #1 (4) forked
Dec 05 09:49:08 compute-0 bash[97542]: d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b
Dec 05 09:49:08 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.ijjpxl for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:49:08 compute-0 sudo[97219]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:49:08 compute-0 ceph-mon[74418]: 10.e scrub starts
Dec 05 09:49:08 compute-0 ceph-mon[74418]: 10.e scrub ok
Dec 05 09:49:08 compute-0 ceph-mon[74418]: 9.e scrub starts
Dec 05 09:49:08 compute-0 ceph-mon[74418]: 9.e scrub ok
Dec 05 09:49:08 compute-0 ceph-mon[74418]: pgmap v59: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 257 B/s, 0 keys/s, 2 objects/s recovering
Dec 05 09:49:08 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 05 09:49:08 compute-0 ceph-mon[74418]: 12.1d scrub starts
Dec 05 09:49:08 compute-0 ceph-mon[74418]: 12.1d scrub ok
Dec 05 09:49:08 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 05 09:49:08 compute-0 ceph-mon[74418]: osdmap e66: 3 total, 3 up, 3 in
Dec 05 09:49:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:49:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 05 09:49:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:09 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.mdzund on compute-2
Dec 05 09:49:09 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.mdzund on compute-2
Dec 05 09:49:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 05 09:49:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 05 09:49:09 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.7( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=6 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.868180275s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 227.174285889s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.1b( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=5 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.871359825s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 227.177474976s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.7( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=6 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.868124962s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.174285889s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.1b( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=5 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.871323586s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.177474976s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.f( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=6 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.875504494s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 227.181823730s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.b( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=6 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.875452042s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 227.181838989s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.b( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=6 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.875412941s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.181838989s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.3( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=6 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.875265121s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 227.181838989s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.13( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=5 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.867711067s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 227.174362183s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.13( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=5 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.867654800s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.174362183s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.3( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=6 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.875105858s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.181838989s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.17( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=3 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.874977112s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 227.181777954s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.17( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=3 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.874940872s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.181777954s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.f( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=6 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.875474930s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.181823730s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.1f( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=5 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.870324135s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 227.177459717s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 67 pg[9.1f( v 52'1029 (0'0,52'1029] local-lis/les=65/66 n=5 ec=55/40 lis/c=65/55 les/c/f=66/57/0 sis=67 pruub=14.870180130s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.177459717s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:09 compute-0 ceph-mon[74418]: 10.c scrub starts
Dec 05 09:49:09 compute-0 ceph-mon[74418]: 10.c scrub ok
Dec 05 09:49:09 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:09 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:09 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:09 compute-0 ceph-mon[74418]: Deploying daemon haproxy.nfs.cephfs.compute-2.mdzund on compute-2
Dec 05 09:49:09 compute-0 ceph-mon[74418]: 12.1e scrub starts
Dec 05 09:49:09 compute-0 ceph-mon[74418]: 12.1e scrub ok
Dec 05 09:49:09 compute-0 ceph-mon[74418]: osdmap e67: 3 total, 3 up, 3 in
Dec 05 09:49:09 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 05 09:49:09 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 05 09:49:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:10 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 8 peering, 345 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 302 B/s, 10 objects/s recovering
Dec 05 09:49:10 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 22 completed events
Dec 05 09:49:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:49:10 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:10 compute-0 ceph-mgr[74711]: [progress WARNING root] Starting Global Recovery Event,8 pgs not in active + clean state
Dec 05 09:49:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:10 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf780016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 05 09:49:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 05 09:49:10 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 05 09:49:10 compute-0 ceph-mon[74418]: 10.a scrub starts
Dec 05 09:49:10 compute-0 ceph-mon[74418]: 10.a scrub ok
Dec 05 09:49:10 compute-0 ceph-mon[74418]: 11.15 scrub starts
Dec 05 09:49:10 compute-0 ceph-mon[74418]: 11.15 scrub ok
Dec 05 09:49:10 compute-0 ceph-mon[74418]: pgmap v62: 353 pgs: 8 peering, 345 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 302 B/s, 10 objects/s recovering
Dec 05 09:49:10 compute-0 ceph-mon[74418]: 12.2 scrub starts
Dec 05 09:49:10 compute-0 ceph-mon[74418]: 12.2 scrub ok
Dec 05 09:49:10 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:10 compute-0 ceph-mon[74418]: osdmap e68: 3 total, 3 up, 3 in
Dec 05 09:49:11 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec 05 09:49:11 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec 05 09:49:11 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 05 09:49:11 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 05 09:49:11 compute-0 ceph-mon[74418]: 12.f scrub starts
Dec 05 09:49:11 compute-0 ceph-mon[74418]: 12.f scrub ok
Dec 05 09:49:11 compute-0 ceph-mon[74418]: 11.0 scrub starts
Dec 05 09:49:11 compute-0 ceph-mon[74418]: 11.0 scrub ok
Dec 05 09:49:11 compute-0 ceph-mon[74418]: 10.4 scrub starts
Dec 05 09:49:11 compute-0 ceph-mon[74418]: 10.4 scrub ok
Dec 05 09:49:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:12 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf700016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 8 peering, 345 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 260 B/s, 8 objects/s recovering
Dec 05 09:49:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:49:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:12 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:12 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 05 09:49:12 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 05 09:49:13 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 05 09:49:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:14 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 8 peering, 345 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 201 B/s, 6 objects/s recovering
Dec 05 09:49:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:49:14 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 05 09:49:14 compute-0 ceph-mon[74418]: 10.9 scrub starts
Dec 05 09:49:14 compute-0 ceph-mon[74418]: 10.9 scrub ok
Dec 05 09:49:14 compute-0 ceph-mon[74418]: 11.c scrub starts
Dec 05 09:49:14 compute-0 ceph-mon[74418]: 11.c scrub ok
Dec 05 09:49:14 compute-0 ceph-mon[74418]: pgmap v64: 353 pgs: 8 peering, 345 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 260 B/s, 8 objects/s recovering
Dec 05 09:49:14 compute-0 ceph-mon[74418]: 8.5 scrub starts
Dec 05 09:49:14 compute-0 ceph-mon[74418]: 8.5 scrub ok
Dec 05 09:49:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:49:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 05 09:49:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Dec 05 09:49:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:14 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 05 09:49:14 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 05 09:49:14 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:14 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:14 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:14 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:14 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.ykcbmo on compute-1
Dec 05 09:49:14 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.ykcbmo on compute-1
Dec 05 09:49:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:14 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf780016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:14 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf700016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:14 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.d deep-scrub starts
Dec 05 09:49:14 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.d deep-scrub ok
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 10.d scrub starts
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 10.d scrub ok
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 11.b scrub starts
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 11.b scrub ok
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 10.b scrub starts
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 10.b scrub ok
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 11.19 scrub starts
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 11.19 scrub ok
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 11.9 scrub starts
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 12.5 scrub starts
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 12.5 scrub ok
Dec 05 09:49:15 compute-0 ceph-mon[74418]: pgmap v65: 353 pgs: 8 peering, 345 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 201 B/s, 6 objects/s recovering
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 11.9 scrub ok
Dec 05 09:49:15 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 12.3 scrub starts
Dec 05 09:49:15 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 12.3 scrub ok
Dec 05 09:49:15 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:15 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:15 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:15 compute-0 ceph-mon[74418]: Deploying daemon keepalived.nfs.cephfs.compute-1.ykcbmo on compute-1
Dec 05 09:49:15 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.e deep-scrub starts
Dec 05 09:49:15 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.e deep-scrub ok
Dec 05 09:49:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:16 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 159 B/s, 5 objects/s recovering
Dec 05 09:49:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 05 09:49:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 05 09:49:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 05 09:49:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 05 09:49:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 05 09:49:16 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 05 09:49:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:49:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:49:16 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 69 pg[9.15( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=69 pruub=15.326331139s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 234.101379395s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:16 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 69 pg[9.15( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=69 pruub=15.326252937s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.101379395s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:16 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 69 pg[9.d( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=69 pruub=15.325698853s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 234.102584839s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:16 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 69 pg[9.d( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=69 pruub=15.325664520s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.102584839s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:16 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 69 pg[9.5( v 58'1032 (0'0,58'1032] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=69 pruub=15.324644089s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=57'1030 lcod 57'1031 mlcod 57'1031 active pruub 234.102096558s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:16 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 69 pg[9.5( v 58'1032 (0'0,58'1032] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=69 pruub=15.324533463s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=57'1030 lcod 57'1031 mlcod 0'0 unknown NOTIFY pruub 234.102096558s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:16 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 69 pg[9.1d( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=69 pruub=15.324687958s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 234.102600098s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:16 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 69 pg[9.1d( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=69 pruub=15.324637413s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.102600098s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:16 compute-0 ceph-mon[74418]: 11.d deep-scrub starts
Dec 05 09:49:16 compute-0 ceph-mon[74418]: 11.d deep-scrub ok
Dec 05 09:49:16 compute-0 ceph-mon[74418]: 12.d scrub starts
Dec 05 09:49:16 compute-0 ceph-mon[74418]: 12.d scrub ok
Dec 05 09:49:16 compute-0 ceph-mon[74418]: 11.3 scrub starts
Dec 05 09:49:16 compute-0 ceph-mon[74418]: 11.3 scrub ok
Dec 05 09:49:16 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 05 09:49:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:49:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:49:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:49:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:49:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:16 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:16 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:16 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 05 09:49:16 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 05 09:49:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 05 09:49:17 compute-0 ceph-mon[74418]: 8.e deep-scrub starts
Dec 05 09:49:17 compute-0 ceph-mon[74418]: 8.e deep-scrub ok
Dec 05 09:49:17 compute-0 ceph-mon[74418]: 12.0 scrub starts
Dec 05 09:49:17 compute-0 ceph-mon[74418]: 12.0 scrub ok
Dec 05 09:49:17 compute-0 ceph-mon[74418]: pgmap v66: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 159 B/s, 5 objects/s recovering
Dec 05 09:49:17 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 05 09:49:17 compute-0 ceph-mon[74418]: 12.4 deep-scrub starts
Dec 05 09:49:17 compute-0 ceph-mon[74418]: osdmap e69: 3 total, 3 up, 3 in
Dec 05 09:49:17 compute-0 ceph-mon[74418]: 12.4 deep-scrub ok
Dec 05 09:49:17 compute-0 ceph-mon[74418]: 12.1f deep-scrub starts
Dec 05 09:49:17 compute-0 ceph-mon[74418]: 12.1f deep-scrub ok
Dec 05 09:49:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 05 09:49:17 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 05 09:49:17 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 70 pg[9.1d( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:17 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 70 pg[9.d( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:17 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 70 pg[9.d( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:17 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 70 pg[9.15( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:17 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 70 pg[9.15( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:17 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 70 pg[9.1d( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:17 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 70 pg[9.5( v 58'1032 (0'0,58'1032] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=57'1030 lcod 57'1031 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:17 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 70 pg[9.5( v 58'1032 (0'0,58'1032] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=57'1030 lcod 57'1031 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:49:17 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 05 09:49:17 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 05 09:49:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 05 09:49:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 05 09:49:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:18 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 05 09:49:18 compute-0 ceph-mon[74418]: 11.2 scrub starts
Dec 05 09:49:18 compute-0 ceph-mon[74418]: 11.2 scrub ok
Dec 05 09:49:18 compute-0 ceph-mon[74418]: 10.1e deep-scrub starts
Dec 05 09:49:18 compute-0 ceph-mon[74418]: 10.1e deep-scrub ok
Dec 05 09:49:18 compute-0 ceph-mon[74418]: osdmap e70: 3 total, 3 up, 3 in
Dec 05 09:49:18 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 05 09:49:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 05 09:49:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 05 09:49:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 05 09:49:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 71 pg[9.16( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=71 pruub=13.017418861s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 234.101058960s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 71 pg[9.16( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=71 pruub=13.017361641s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.101058960s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 71 pg[9.e( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=71 pruub=13.017058372s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 234.101364136s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 71 pg[9.e( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=71 pruub=13.017001152s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.101364136s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 71 pg[9.6( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=71 pruub=13.016756058s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 234.101898193s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 71 pg[9.6( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=71 pruub=13.016725540s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.101898193s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 71 pg[9.1e( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=71 pruub=13.016904831s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 234.102340698s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 71 pg[9.1e( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=71 pruub=13.016869545s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.102340698s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 71 pg[9.1d( v 52'1029 (0'0,52'1029] local-lis/les=70/71 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[55,70)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 71 pg[9.15( v 52'1029 (0'0,52'1029] local-lis/les=70/71 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[55,70)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 71 pg[9.5( v 58'1032 (0'0,58'1032] local-lis/les=70/71 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[55,70)/1 crt=58'1032 lcod 57'1031 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:18 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 71 pg[9.d( v 52'1029 (0'0,52'1029] local-lis/les=70/71 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[55,70)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:18 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:18 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:18 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec 05 09:49:18 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec 05 09:49:19 compute-0 ceph-mon[74418]: 8.1 scrub starts
Dec 05 09:49:19 compute-0 ceph-mon[74418]: 8.1 scrub ok
Dec 05 09:49:19 compute-0 ceph-mon[74418]: 12.1b scrub starts
Dec 05 09:49:19 compute-0 ceph-mon[74418]: 12.1b scrub ok
Dec 05 09:49:19 compute-0 ceph-mon[74418]: pgmap v69: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:19 compute-0 ceph-mon[74418]: 11.8 scrub starts
Dec 05 09:49:19 compute-0 ceph-mon[74418]: 11.8 scrub ok
Dec 05 09:49:19 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 05 09:49:19 compute-0 ceph-mon[74418]: osdmap e71: 3 total, 3 up, 3 in
Dec 05 09:49:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 05 09:49:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 05 09:49:19 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.1e( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.1e( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.16( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.e( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.e( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.6( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.6( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.16( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.15( v 52'1029 (0'0,52'1029] local-lis/les=70/71 n=4 ec=55/40 lis/c=70/55 les/c/f=71/57/0 sis=72 pruub=14.998884201s) [2] async=[2] r=-1 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 237.098739624s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.15( v 52'1029 (0'0,52'1029] local-lis/les=70/71 n=4 ec=55/40 lis/c=70/55 les/c/f=71/57/0 sis=72 pruub=14.998780251s) [2] r=-1 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.098739624s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.d( v 52'1029 (0'0,52'1029] local-lis/les=70/71 n=6 ec=55/40 lis/c=70/55 les/c/f=71/57/0 sis=72 pruub=14.998285294s) [2] async=[2] r=-1 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 237.098861694s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.d( v 52'1029 (0'0,52'1029] local-lis/les=70/71 n=6 ec=55/40 lis/c=70/55 les/c/f=71/57/0 sis=72 pruub=14.998212814s) [2] r=-1 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.098861694s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.5( v 71'1035 (0'0,71'1035] local-lis/les=70/71 n=6 ec=55/40 lis/c=70/55 les/c/f=71/57/0 sis=72 pruub=14.997936249s) [2] async=[2] r=-1 lpr=72 pi=[55,72)/1 crt=58'1032 lcod 71'1034 mlcod 71'1034 active pruub 237.098815918s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.5( v 71'1035 (0'0,71'1035] local-lis/les=70/71 n=6 ec=55/40 lis/c=70/55 les/c/f=71/57/0 sis=72 pruub=14.997854233s) [2] r=-1 lpr=72 pi=[55,72)/1 crt=58'1032 lcod 71'1034 mlcod 0'0 unknown NOTIFY pruub 237.098815918s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.1d( v 52'1029 (0'0,52'1029] local-lis/les=70/71 n=5 ec=55/40 lis/c=70/55 les/c/f=71/57/0 sis=72 pruub=14.994081497s) [2] async=[2] r=-1 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 237.095550537s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:19 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 72 pg[9.1d( v 52'1029 (0'0,52'1029] local-lis/les=70/71 n=5 ec=55/40 lis/c=70/55 les/c/f=71/57/0 sis=72 pruub=14.994022369s) [2] r=-1 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.095550537s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:19 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 05 09:49:19 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 05 09:49:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:20 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 6 objects/s recovering
Dec 05 09:49:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:49:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:49:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 05 09:49:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:20 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:20 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:20 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:20 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:20 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 05 09:49:20 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 05 09:49:20 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.rnjumh on compute-2
Dec 05 09:49:20 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.rnjumh on compute-2
Dec 05 09:49:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 05 09:49:20 compute-0 ceph-mon[74418]: 8.0 scrub starts
Dec 05 09:49:20 compute-0 ceph-mon[74418]: 8.0 scrub ok
Dec 05 09:49:20 compute-0 ceph-mon[74418]: 12.16 scrub starts
Dec 05 09:49:20 compute-0 ceph-mon[74418]: 12.16 scrub ok
Dec 05 09:49:20 compute-0 ceph-mon[74418]: 12.13 scrub starts
Dec 05 09:49:20 compute-0 ceph-mon[74418]: 12.13 scrub ok
Dec 05 09:49:20 compute-0 ceph-mon[74418]: osdmap e72: 3 total, 3 up, 3 in
Dec 05 09:49:20 compute-0 ceph-mon[74418]: pgmap v72: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 6 objects/s recovering
Dec 05 09:49:20 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:20 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:20 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:20 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:20 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:20 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 05 09:49:20 compute-0 ceph-mon[74418]: Deploying daemon keepalived.nfs.cephfs.compute-2.rnjumh on compute-2
Dec 05 09:49:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:20 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 05 09:49:20 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 05 09:49:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:20 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:20 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec 05 09:49:20 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec 05 09:49:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 73 pg[9.e( v 52'1029 (0'0,52'1029] local-lis/les=72/73 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 73 pg[9.16( v 52'1029 (0'0,52'1029] local-lis/les=72/73 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 73 pg[9.1e( v 52'1029 (0'0,52'1029] local-lis/les=72/73 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 73 pg[9.6( v 52'1029 (0'0,52'1029] local-lis/les=72/73 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[55,72)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:21 compute-0 ceph-mon[74418]: 8.7 scrub starts
Dec 05 09:49:21 compute-0 ceph-mon[74418]: 8.7 scrub ok
Dec 05 09:49:21 compute-0 ceph-mon[74418]: 12.14 scrub starts
Dec 05 09:49:21 compute-0 ceph-mon[74418]: 12.14 scrub ok
Dec 05 09:49:21 compute-0 ceph-mon[74418]: 12.9 scrub starts
Dec 05 09:49:21 compute-0 ceph-mon[74418]: 12.9 scrub ok
Dec 05 09:49:21 compute-0 ceph-mon[74418]: osdmap e73: 3 total, 3 up, 3 in
Dec 05 09:49:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 05 09:49:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 05 09:49:21 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 05 09:49:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 74 pg[9.1e( v 52'1029 (0'0,52'1029] local-lis/les=72/73 n=5 ec=55/40 lis/c=72/55 les/c/f=73/57/0 sis=74 pruub=15.487714767s) [0] async=[0] r=-1 lpr=74 pi=[55,74)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 239.640747070s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 74 pg[9.6( v 52'1029 (0'0,52'1029] local-lis/les=72/73 n=6 ec=55/40 lis/c=72/55 les/c/f=73/57/0 sis=74 pruub=15.487657547s) [0] async=[0] r=-1 lpr=74 pi=[55,74)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 239.640701294s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 74 pg[9.6( v 52'1029 (0'0,52'1029] local-lis/les=72/73 n=6 ec=55/40 lis/c=72/55 les/c/f=73/57/0 sis=74 pruub=15.487560272s) [0] r=-1 lpr=74 pi=[55,74)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 239.640701294s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 74 pg[9.1e( v 52'1029 (0'0,52'1029] local-lis/les=72/73 n=5 ec=55/40 lis/c=72/55 les/c/f=73/57/0 sis=74 pruub=15.487585068s) [0] r=-1 lpr=74 pi=[55,74)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 239.640747070s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 74 pg[9.16( v 52'1029 (0'0,52'1029] local-lis/les=72/73 n=4 ec=55/40 lis/c=72/55 les/c/f=73/57/0 sis=74 pruub=15.487355232s) [0] async=[0] r=-1 lpr=74 pi=[55,74)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 239.640670776s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 74 pg[9.16( v 52'1029 (0'0,52'1029] local-lis/les=72/73 n=4 ec=55/40 lis/c=72/55 les/c/f=73/57/0 sis=74 pruub=15.487191200s) [0] r=-1 lpr=74 pi=[55,74)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 239.640670776s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 74 pg[9.e( v 52'1029 (0'0,52'1029] local-lis/les=72/73 n=5 ec=55/40 lis/c=72/55 les/c/f=73/57/0 sis=74 pruub=15.486189842s) [0] async=[0] r=-1 lpr=74 pi=[55,74)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 239.640655518s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:21 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 74 pg[9.e( v 52'1029 (0'0,52'1029] local-lis/les=72/73 n=5 ec=55/40 lis/c=72/55 les/c/f=73/57/0 sis=74 pruub=15.485984802s) [0] r=-1 lpr=74 pi=[55,74)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 239.640655518s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:21 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 05 09:49:21 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 05 09:49:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:22 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v75: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 6 objects/s recovering
Dec 05 09:49:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:49:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:22 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:22 compute-0 ceph-mon[74418]: 11.6 scrub starts
Dec 05 09:49:22 compute-0 ceph-mon[74418]: 11.6 scrub ok
Dec 05 09:49:22 compute-0 ceph-mon[74418]: 12.1 scrub starts
Dec 05 09:49:22 compute-0 ceph-mon[74418]: 12.1 scrub ok
Dec 05 09:49:22 compute-0 ceph-mon[74418]: 10.f scrub starts
Dec 05 09:49:22 compute-0 ceph-mon[74418]: 10.f scrub ok
Dec 05 09:49:22 compute-0 ceph-mon[74418]: osdmap e74: 3 total, 3 up, 3 in
Dec 05 09:49:22 compute-0 ceph-mon[74418]: 11.18 scrub starts
Dec 05 09:49:22 compute-0 ceph-mon[74418]: 11.18 scrub ok
Dec 05 09:49:22 compute-0 ceph-mon[74418]: pgmap v75: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 6 objects/s recovering
Dec 05 09:49:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 05 09:49:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 05 09:49:22 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 05 09:49:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:22 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:22 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 05 09:49:22 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 05 09:49:23 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 05 09:49:23 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 05 09:49:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:24 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:24 compute-0 ceph-mon[74418]: 11.f scrub starts
Dec 05 09:49:24 compute-0 ceph-mon[74418]: 11.f scrub ok
Dec 05 09:49:24 compute-0 ceph-mon[74418]: 8.d scrub starts
Dec 05 09:49:24 compute-0 ceph-mon[74418]: 8.d scrub ok
Dec 05 09:49:24 compute-0 ceph-mon[74418]: osdmap e75: 3 total, 3 up, 3 in
Dec 05 09:49:24 compute-0 ceph-mon[74418]: 8.1a scrub starts
Dec 05 09:49:24 compute-0 ceph-mon[74418]: 8.1a scrub ok
Dec 05 09:49:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 128 B/s, 5 objects/s recovering
Dec 05 09:49:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:24 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:24 compute-0 sudo[97595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ednqtatlhqdzetockvmpglqqdzbqapxk ; /usr/bin/python3'
Dec 05 09:49:24 compute-0 sudo[97595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:49:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:24 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:24 compute-0 python3[97597]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:49:24 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 05 09:49:24 compute-0 podman[97598]: 2025-12-05 09:49:24.985815434 +0000 UTC m=+0.072727166 container create 729158832cc78ccc5cdbb6db9fef6d4e68fc41626669b13dc6026338b5c33248 (image=quay.io/ceph/ceph:v19, name=priceless_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 05 09:49:24 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 05 09:49:25 compute-0 systemd[1]: Started libpod-conmon-729158832cc78ccc5cdbb6db9fef6d4e68fc41626669b13dc6026338b5c33248.scope.
Dec 05 09:49:25 compute-0 podman[97598]: 2025-12-05 09:49:24.957818548 +0000 UTC m=+0.044730360 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:49:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5923823c41a05261c7540804d85f756a69f53a74c5f1b7424a00c6df97e2995e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5923823c41a05261c7540804d85f756a69f53a74c5f1b7424a00c6df97e2995e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:25 compute-0 podman[97598]: 2025-12-05 09:49:25.087851938 +0000 UTC m=+0.174763770 container init 729158832cc78ccc5cdbb6db9fef6d4e68fc41626669b13dc6026338b5c33248 (image=quay.io/ceph/ceph:v19, name=priceless_cori, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 05 09:49:25 compute-0 podman[97598]: 2025-12-05 09:49:25.095345421 +0000 UTC m=+0.182257173 container start 729158832cc78ccc5cdbb6db9fef6d4e68fc41626669b13dc6026338b5c33248 (image=quay.io/ceph/ceph:v19, name=priceless_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 05 09:49:25 compute-0 podman[97598]: 2025-12-05 09:49:25.100606058 +0000 UTC m=+0.187517820 container attach 729158832cc78ccc5cdbb6db9fef6d4e68fc41626669b13dc6026338b5c33248 (image=quay.io/ceph/ceph:v19, name=priceless_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 05 09:49:25 compute-0 ceph-mon[74418]: 11.1 scrub starts
Dec 05 09:49:25 compute-0 ceph-mon[74418]: 11.1 scrub ok
Dec 05 09:49:25 compute-0 ceph-mon[74418]: 11.e scrub starts
Dec 05 09:49:25 compute-0 ceph-mon[74418]: 11.e scrub ok
Dec 05 09:49:25 compute-0 ceph-mon[74418]: 8.1e scrub starts
Dec 05 09:49:25 compute-0 ceph-mon[74418]: 8.1e scrub ok
Dec 05 09:49:25 compute-0 ceph-mon[74418]: 8.8 scrub starts
Dec 05 09:49:25 compute-0 ceph-mon[74418]: 8.8 scrub ok
Dec 05 09:49:25 compute-0 ceph-mon[74418]: pgmap v77: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 128 B/s, 5 objects/s recovering
Dec 05 09:49:25 compute-0 ceph-mon[74418]: 10.1 scrub starts
Dec 05 09:49:25 compute-0 ceph-mon[74418]: 10.1 scrub ok
Dec 05 09:49:25 compute-0 priceless_cori[97613]: could not fetch user info: no user info saved
Dec 05 09:49:25 compute-0 systemd[1]: libpod-729158832cc78ccc5cdbb6db9fef6d4e68fc41626669b13dc6026338b5c33248.scope: Deactivated successfully.
Dec 05 09:49:25 compute-0 podman[97598]: 2025-12-05 09:49:25.368261361 +0000 UTC m=+0.455173103 container died 729158832cc78ccc5cdbb6db9fef6d4e68fc41626669b13dc6026338b5c33248 (image=quay.io/ceph/ceph:v19, name=priceless_cori, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 09:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5923823c41a05261c7540804d85f756a69f53a74c5f1b7424a00c6df97e2995e-merged.mount: Deactivated successfully.
Dec 05 09:49:25 compute-0 podman[97598]: 2025-12-05 09:49:25.421665285 +0000 UTC m=+0.508577007 container remove 729158832cc78ccc5cdbb6db9fef6d4e68fc41626669b13dc6026338b5c33248 (image=quay.io/ceph/ceph:v19, name=priceless_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 09:49:25 compute-0 systemd[1]: libpod-conmon-729158832cc78ccc5cdbb6db9fef6d4e68fc41626669b13dc6026338b5c33248.scope: Deactivated successfully.
Dec 05 09:49:25 compute-0 sudo[97595]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:49:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:49:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 05 09:49:25 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:25 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:25 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:25 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 05 09:49:25 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 05 09:49:25 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:25 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:25 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.ewczkf on compute-0
Dec 05 09:49:25 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.ewczkf on compute-0
Dec 05 09:49:25 compute-0 sudo[97754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzdgkgkodgiysffsretmielppdeyfnfk ; /usr/bin/python3'
Dec 05 09:49:25 compute-0 sudo[97754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:49:25 compute-0 sudo[97719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:49:25 compute-0 sudo[97719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:49:25 compute-0 sudo[97719]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:25 compute-0 sudo[97764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:49:25 compute-0 sudo[97764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:49:25 compute-0 python3[97761]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:49:25 compute-0 podman[97789]: 2025-12-05 09:49:25.846067708 +0000 UTC m=+0.089419507 container create 3b7a423f8cca2f04ec56f5e92fc39cce6603926566f65a935c235cf15683d772 (image=quay.io/ceph/ceph:v19, name=stupefied_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:49:25 compute-0 podman[97789]: 2025-12-05 09:49:25.78278254 +0000 UTC m=+0.026134439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:49:25 compute-0 systemd[1]: Started libpod-conmon-3b7a423f8cca2f04ec56f5e92fc39cce6603926566f65a935c235cf15683d772.scope.
Dec 05 09:49:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b2d6f0c421ab9f5324e01a3f3c9bf8e3e0486c2652618e9e82cb0433c07da8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b2d6f0c421ab9f5324e01a3f3c9bf8e3e0486c2652618e9e82cb0433c07da8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:25 compute-0 podman[97789]: 2025-12-05 09:49:25.934141241 +0000 UTC m=+0.177493060 container init 3b7a423f8cca2f04ec56f5e92fc39cce6603926566f65a935c235cf15683d772 (image=quay.io/ceph/ceph:v19, name=stupefied_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 05 09:49:25 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 05 09:49:25 compute-0 podman[97789]: 2025-12-05 09:49:25.94298831 +0000 UTC m=+0.186340109 container start 3b7a423f8cca2f04ec56f5e92fc39cce6603926566f65a935c235cf15683d772 (image=quay.io/ceph/ceph:v19, name=stupefied_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 09:49:25 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 05 09:49:25 compute-0 podman[97789]: 2025-12-05 09:49:25.949723874 +0000 UTC m=+0.193075683 container attach 3b7a423f8cca2f04ec56f5e92fc39cce6603926566f65a935c235cf15683d772 (image=quay.io/ceph/ceph:v19, name=stupefied_mayer, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:49:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:26 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v78: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 3 objects/s recovering
Dec 05 09:49:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 05 09:49:26 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]: {
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "user_id": "openstack",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "display_name": "openstack",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "email": "",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "suspended": 0,
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "max_buckets": 1000,
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "subusers": [],
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "keys": [
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:         {
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:             "user": "openstack",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:             "access_key": "U80NU66DSMV6CZPGBNH1",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:             "secret_key": "2YaAiu4DxtJ3XUdkZ9IbKXUG2ERcZJOUcdXtkAxx",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:             "active": true,
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:             "create_date": "2025-12-05T09:49:26.149355Z"
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:         }
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     ],
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "swift_keys": [],
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "caps": [],
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "op_mask": "read, write, delete",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "default_placement": "",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "default_storage_class": "",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "placement_tags": [],
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "bucket_quota": {
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:         "enabled": false,
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:         "check_on_raw": false,
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:         "max_size": -1,
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:         "max_size_kb": 0,
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:         "max_objects": -1
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     },
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "user_quota": {
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:         "enabled": false,
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:         "check_on_raw": false,
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:         "max_size": -1,
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:         "max_size_kb": 0,
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:         "max_objects": -1
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     },
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "temp_url_keys": [],
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "type": "rgw",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "mfa_ids": [],
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "account_id": "",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "path": "/",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "create_date": "2025-12-05T09:49:26.148301Z",
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "tags": [],
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]:     "group_ids": []
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]: }
Dec 05 09:49:26 compute-0 stupefied_mayer[97816]: 
Dec 05 09:49:26 compute-0 systemd[1]: libpod-3b7a423f8cca2f04ec56f5e92fc39cce6603926566f65a935c235cf15683d772.scope: Deactivated successfully.
Dec 05 09:49:26 compute-0 podman[97789]: 2025-12-05 09:49:26.219686688 +0000 UTC m=+0.463038497 container died 3b7a423f8cca2f04ec56f5e92fc39cce6603926566f65a935c235cf15683d772 (image=quay.io/ceph/ceph:v19, name=stupefied_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 09:49:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-65b2d6f0c421ab9f5324e01a3f3c9bf8e3e0486c2652618e9e82cb0433c07da8-merged.mount: Deactivated successfully.
Dec 05 09:49:26 compute-0 podman[97789]: 2025-12-05 09:49:26.255010132 +0000 UTC m=+0.498361941 container remove 3b7a423f8cca2f04ec56f5e92fc39cce6603926566f65a935c235cf15683d772 (image=quay.io/ceph/ceph:v19, name=stupefied_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 09:49:26 compute-0 systemd[1]: libpod-conmon-3b7a423f8cca2f04ec56f5e92fc39cce6603926566f65a935c235cf15683d772.scope: Deactivated successfully.
Dec 05 09:49:26 compute-0 sudo[97754]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:26 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 05 09:49:26 compute-0 ceph-mon[74418]: 8.1d scrub starts
Dec 05 09:49:26 compute-0 ceph-mon[74418]: 8.1d scrub ok
Dec 05 09:49:26 compute-0 ceph-mon[74418]: 11.14 scrub starts
Dec 05 09:49:26 compute-0 ceph-mon[74418]: 11.14 scrub ok
Dec 05 09:49:26 compute-0 ceph-mon[74418]: 12.11 deep-scrub starts
Dec 05 09:49:26 compute-0 ceph-mon[74418]: 12.11 deep-scrub ok
Dec 05 09:49:26 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:26 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:26 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:26 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:26 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 05 09:49:26 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:26 compute-0 ceph-mon[74418]: Deploying daemon keepalived.nfs.cephfs.compute-0.ewczkf on compute-0
Dec 05 09:49:26 compute-0 ceph-mon[74418]: pgmap v78: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 3 objects/s recovering
Dec 05 09:49:26 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 05 09:49:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:26 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:26 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 05 09:49:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 05 09:49:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 05 09:49:26 compute-0 python3[97985]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:49:26 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.10 deep-scrub starts
Dec 05 09:49:27 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.10 deep-scrub ok
Dec 05 09:49:27 compute-0 ceph-mgr[74711]: [dashboard INFO request] [192.168.122.100:58486] [GET] [200] [0.125s] [6.3K] [7b898699-0898-4ae0-af22-e4facc7235a8] /
Dec 05 09:49:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:49:27 compute-0 ceph-mon[74418]: 11.1f scrub starts
Dec 05 09:49:27 compute-0 ceph-mon[74418]: 11.1f scrub ok
Dec 05 09:49:27 compute-0 ceph-mon[74418]: 11.12 scrub starts
Dec 05 09:49:27 compute-0 ceph-mon[74418]: 11.12 scrub ok
Dec 05 09:49:27 compute-0 ceph-mon[74418]: 8.2 scrub starts
Dec 05 09:49:27 compute-0 ceph-mon[74418]: 8.2 scrub ok
Dec 05 09:49:27 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 05 09:49:27 compute-0 ceph-mon[74418]: osdmap e76: 3 total, 3 up, 3 in
Dec 05 09:49:27 compute-0 ceph-mon[74418]: 11.10 deep-scrub starts
Dec 05 09:49:27 compute-0 ceph-mon[74418]: 11.10 deep-scrub ok
Dec 05 09:49:27 compute-0 ceph-mon[74418]: 11.a scrub starts
Dec 05 09:49:27 compute-0 ceph-mon[74418]: 11.a scrub ok
Dec 05 09:49:27 compute-0 python3[98021]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:49:27 compute-0 ceph-mgr[74711]: [dashboard INFO request] [192.168.122.100:58494] [GET] [200] [0.002s] [6.3K] [cc8cdad4-705e-4a14-8ef3-7b594c764cdc] /
Dec 05 09:49:28 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 05 09:49:28 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 05 09:49:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:28 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 932 B/s rd, 0 op/s; 33 B/s, 3 objects/s recovering
Dec 05 09:49:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 05 09:49:28 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 05 09:49:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:28 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 05 09:49:28 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec 05 09:49:28 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec 05 09:49:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:29 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 05 09:49:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 05 09:49:29 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 77 pg[9.8( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=77 pruub=10.437751770s) [2] r=-1 lpr=77 pi=[55,77)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 242.101852417s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:29 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 77 pg[9.8( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=77 pruub=10.437701225s) [2] r=-1 lpr=77 pi=[55,77)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.101852417s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:29 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 77 pg[9.18( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=77 pruub=10.437543869s) [2] r=-1 lpr=77 pi=[55,77)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 242.102554321s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:29 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 77 pg[9.18( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=77 pruub=10.437493324s) [2] r=-1 lpr=77 pi=[55,77)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.102554321s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:29 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 05 09:49:29 compute-0 ceph-mon[74418]: 11.5 scrub starts
Dec 05 09:49:29 compute-0 ceph-mon[74418]: 11.5 scrub ok
Dec 05 09:49:29 compute-0 ceph-mon[74418]: 8.13 scrub starts
Dec 05 09:49:29 compute-0 ceph-mon[74418]: 8.13 scrub ok
Dec 05 09:49:29 compute-0 ceph-mon[74418]: pgmap v80: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 932 B/s rd, 0 op/s; 33 B/s, 3 objects/s recovering
Dec 05 09:49:29 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 05 09:49:29 compute-0 ceph-mon[74418]: 12.17 scrub starts
Dec 05 09:49:29 compute-0 ceph-mon[74418]: 12.17 scrub ok
Dec 05 09:49:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/094929 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:49:29 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.19 deep-scrub starts
Dec 05 09:49:29 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.19 deep-scrub ok
Dec 05 09:49:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:30 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v82: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 2 objects/s recovering
Dec 05 09:49:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:30 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 05 09:49:30 compute-0 ceph-mon[74418]: 11.4 scrub starts
Dec 05 09:49:30 compute-0 ceph-mon[74418]: 11.4 scrub ok
Dec 05 09:49:30 compute-0 ceph-mon[74418]: 11.11 scrub starts
Dec 05 09:49:30 compute-0 ceph-mon[74418]: 11.11 scrub ok
Dec 05 09:49:30 compute-0 ceph-mon[74418]: 11.7 scrub starts
Dec 05 09:49:30 compute-0 ceph-mon[74418]: 11.7 scrub ok
Dec 05 09:49:30 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 05 09:49:30 compute-0 ceph-mon[74418]: osdmap e77: 3 total, 3 up, 3 in
Dec 05 09:49:30 compute-0 ceph-mon[74418]: 11.16 scrub starts
Dec 05 09:49:30 compute-0 ceph-mon[74418]: 11.16 scrub ok
Dec 05 09:49:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 05 09:49:30 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 05 09:49:30 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 78 pg[9.8( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=78) [2]/[1] r=0 lpr=78 pi=[55,78)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:30 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 78 pg[9.18( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=78) [2]/[1] r=0 lpr=78 pi=[55,78)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:30 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 78 pg[9.8( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=78) [2]/[1] r=0 lpr=78 pi=[55,78)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:30 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 78 pg[9.18( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=78) [2]/[1] r=0 lpr=78 pi=[55,78)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:30 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Dec 05 09:49:30 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Dec 05 09:49:30 compute-0 podman[97924]: 2025-12-05 09:49:30.977502241 +0000 UTC m=+4.910868480 container create 531ba085eeb900a2637c7f12c0c591c7c4fcd601ff00ca67ae726941955e3053 (image=quay.io/ceph/keepalived:2.2.4, name=zealous_chebyshev, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, name=keepalived, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vcs-type=git, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, release=1793, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec 05 09:49:30 compute-0 podman[97924]: 2025-12-05 09:49:30.962917963 +0000 UTC m=+4.896284232 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 05 09:49:31 compute-0 systemd[1]: Started libpod-conmon-531ba085eeb900a2637c7f12c0c591c7c4fcd601ff00ca67ae726941955e3053.scope.
Dec 05 09:49:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:49:31 compute-0 podman[97924]: 2025-12-05 09:49:31.049999669 +0000 UTC m=+4.983365918 container init 531ba085eeb900a2637c7f12c0c591c7c4fcd601ff00ca67ae726941955e3053 (image=quay.io/ceph/keepalived:2.2.4, name=zealous_chebyshev, io.openshift.expose-services=, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, name=keepalived, io.openshift.tags=Ceph keepalived, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, release=1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64)
Dec 05 09:49:31 compute-0 podman[97924]: 2025-12-05 09:49:31.058340585 +0000 UTC m=+4.991706824 container start 531ba085eeb900a2637c7f12c0c591c7c4fcd601ff00ca67ae726941955e3053 (image=quay.io/ceph/keepalived:2.2.4, name=zealous_chebyshev, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.openshift.expose-services=, name=keepalived, version=2.2.4, vcs-type=git, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:49:31 compute-0 podman[97924]: 2025-12-05 09:49:31.061011804 +0000 UTC m=+4.994378063 container attach 531ba085eeb900a2637c7f12c0c591c7c4fcd601ff00ca67ae726941955e3053 (image=quay.io/ceph/keepalived:2.2.4, name=zealous_chebyshev, io.openshift.expose-services=, vendor=Red Hat, Inc., version=2.2.4, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:49:31 compute-0 zealous_chebyshev[98088]: 0 0
Dec 05 09:49:31 compute-0 systemd[1]: libpod-531ba085eeb900a2637c7f12c0c591c7c4fcd601ff00ca67ae726941955e3053.scope: Deactivated successfully.
Dec 05 09:49:31 compute-0 podman[97924]: 2025-12-05 09:49:31.063459498 +0000 UTC m=+4.996825737 container died 531ba085eeb900a2637c7f12c0c591c7c4fcd601ff00ca67ae726941955e3053 (image=quay.io/ceph/keepalived:2.2.4, name=zealous_chebyshev, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1793, vcs-type=git, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, architecture=x86_64, version=2.2.4)
Dec 05 09:49:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-64b0e6f0038eea213906d2340ee09a4104fd0defdd291a43e665318ccb23af30-merged.mount: Deactivated successfully.
Dec 05 09:49:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:31 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:31 compute-0 podman[97924]: 2025-12-05 09:49:31.09909875 +0000 UTC m=+5.032464989 container remove 531ba085eeb900a2637c7f12c0c591c7c4fcd601ff00ca67ae726941955e3053 (image=quay.io/ceph/keepalived:2.2.4, name=zealous_chebyshev, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, architecture=x86_64)
Dec 05 09:49:31 compute-0 systemd[1]: libpod-conmon-531ba085eeb900a2637c7f12c0c591c7c4fcd601ff00ca67ae726941955e3053.scope: Deactivated successfully.
Dec 05 09:49:31 compute-0 systemd[1]: Reloading.
Dec 05 09:49:31 compute-0 systemd-sysv-generator[98133]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:49:31 compute-0 systemd-rc-local-generator[98127]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:49:31 compute-0 systemd[1]: Reloading.
Dec 05 09:49:31 compute-0 systemd-sysv-generator[98178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:49:31 compute-0 systemd-rc-local-generator[98174]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:49:31 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.ewczkf for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:49:31 compute-0 ceph-mon[74418]: 12.19 deep-scrub starts
Dec 05 09:49:31 compute-0 ceph-mon[74418]: 12.19 deep-scrub ok
Dec 05 09:49:31 compute-0 ceph-mon[74418]: 8.4 scrub starts
Dec 05 09:49:31 compute-0 ceph-mon[74418]: 8.4 scrub ok
Dec 05 09:49:31 compute-0 ceph-mon[74418]: pgmap v82: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 2 objects/s recovering
Dec 05 09:49:31 compute-0 ceph-mon[74418]: 8.9 scrub starts
Dec 05 09:49:31 compute-0 ceph-mon[74418]: 8.9 scrub ok
Dec 05 09:49:31 compute-0 ceph-mon[74418]: osdmap e78: 3 total, 3 up, 3 in
Dec 05 09:49:31 compute-0 ceph-mon[74418]: 12.1c scrub starts
Dec 05 09:49:31 compute-0 ceph-mon[74418]: 12.1c scrub ok
Dec 05 09:49:31 compute-0 ceph-mon[74418]: 8.b scrub starts
Dec 05 09:49:31 compute-0 ceph-mon[74418]: 8.b scrub ok
Dec 05 09:49:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 05 09:49:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 05 09:49:31 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 05 09:49:31 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 79 pg[9.8( v 52'1029 (0'0,52'1029] local-lis/les=78/79 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[55,78)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:31 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 79 pg[9.18( v 52'1029 (0'0,52'1029] local-lis/les=78/79 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[55,78)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:31 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec 05 09:49:31 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec 05 09:49:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:32 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:32 compute-0 podman[98227]: 2025-12-05 09:49:32.05679597 +0000 UTC m=+0.066009291 container create f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, io.k8s.display-name=Keepalived on RHEL 9, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, distribution-scope=public, architecture=x86_64, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=)
Dec 05 09:49:32 compute-0 podman[98227]: 2025-12-05 09:49:32.028144178 +0000 UTC m=+0.037357489 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 05 09:49:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fe6b7b965727fc990a6b574e6722375dbc470a1affa79d95169e9f3951bba/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:32 compute-0 podman[98227]: 2025-12-05 09:49:32.133283832 +0000 UTC m=+0.142497133 container init f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, io.openshift.expose-services=, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.buildah.version=1.28.2, description=keepalived for Ceph, vendor=Red Hat, Inc.)
Dec 05 09:49:32 compute-0 podman[98227]: 2025-12-05 09:49:32.142189852 +0000 UTC m=+0.151403103 container start f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, distribution-scope=public, vcs-type=git, io.buildah.version=1.28.2)
Dec 05 09:49:32 compute-0 bash[98227]: f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2
Dec 05 09:49:32 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.ewczkf for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:49:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:49:32 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 05 09:49:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:49:32 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec 05 09:49:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:49:32 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 05 09:49:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:49:32 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 05 09:49:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:49:32 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 05 09:49:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:49:32 2025: Starting VRRP child process, pid=4
Dec 05 09:49:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:49:32 2025: Startup complete
Dec 05 09:49:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:49:32 2025: (VI_0) Entering BACKUP STATE (init)
Dec 05 09:49:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:49:32 2025: VRRP_Script(check_backend) succeeded
Dec 05 09:49:32 compute-0 sudo[97764]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:49:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:49:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 05 09:49:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:32 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev c7aa2a0f-8751-44fe-b93f-5f117475e4b1 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec 05 09:49:32 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event c7aa2a0f-8751-44fe-b93f-5f117475e4b1 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 45 seconds
Dec 05 09:49:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 05 09:49:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:32 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 961a0d1f-13bd-40e2-99f5-53483930855c (Updating alertmanager deployment (+1 -> 1))
Dec 05 09:49:32 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Dec 05 09:49:32 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Dec 05 09:49:32 compute-0 sudo[98252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:49:32 compute-0 sudo[98252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:49:32 compute-0 sudo[98252]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:32 compute-0 sudo[98277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:49:32 compute-0 sudo[98277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:49:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:49:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 05 09:49:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:32 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 05 09:49:32 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 05 09:49:32 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 80 pg[9.8( v 52'1029 (0'0,52'1029] local-lis/les=78/79 n=6 ec=55/40 lis/c=78/55 les/c/f=79/57/0 sis=80 pruub=15.238727570s) [2] async=[2] r=-1 lpr=80 pi=[55,80)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 250.394866943s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:32 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 80 pg[9.18( v 52'1029 (0'0,52'1029] local-lis/les=78/79 n=5 ec=55/40 lis/c=78/55 les/c/f=79/57/0 sis=80 pruub=15.238703728s) [2] async=[2] r=-1 lpr=80 pi=[55,80)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 250.394866943s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:32 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 80 pg[9.18( v 52'1029 (0'0,52'1029] local-lis/les=78/79 n=5 ec=55/40 lis/c=78/55 les/c/f=79/57/0 sis=80 pruub=15.238658905s) [2] r=-1 lpr=80 pi=[55,80)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.394866943s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:32 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 80 pg[9.8( v 52'1029 (0'0,52'1029] local-lis/les=78/79 n=6 ec=55/40 lis/c=78/55 les/c/f=79/57/0 sis=80 pruub=15.238634109s) [2] r=-1 lpr=80 pi=[55,80)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.394866943s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:32 compute-0 ceph-mon[74418]: 8.1b scrub starts
Dec 05 09:49:32 compute-0 ceph-mon[74418]: 8.1b scrub ok
Dec 05 09:49:32 compute-0 ceph-mon[74418]: osdmap e79: 3 total, 3 up, 3 in
Dec 05 09:49:32 compute-0 ceph-mon[74418]: 8.18 scrub starts
Dec 05 09:49:32 compute-0 ceph-mon[74418]: 10.5 scrub starts
Dec 05 09:49:32 compute-0 ceph-mon[74418]: 8.18 scrub ok
Dec 05 09:49:32 compute-0 ceph-mon[74418]: 10.5 scrub ok
Dec 05 09:49:32 compute-0 ceph-mon[74418]: pgmap v85: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:32 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:32 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:32 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:32 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:32 compute-0 ceph-mon[74418]: 11.13 scrub starts
Dec 05 09:49:32 compute-0 ceph-mon[74418]: 11.13 scrub ok
Dec 05 09:49:32 compute-0 ceph-mon[74418]: Deploying daemon alertmanager.compute-0 on compute-0
Dec 05 09:49:32 compute-0 ceph-mon[74418]: osdmap e80: 3 total, 3 up, 3 in
Dec 05 09:49:32 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 05 09:49:32 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 05 09:49:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:33 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c002010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 05 09:49:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 05 09:49:33 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 05 09:49:33 compute-0 ceph-mon[74418]: 10.2 scrub starts
Dec 05 09:49:33 compute-0 ceph-mon[74418]: 10.2 scrub ok
Dec 05 09:49:33 compute-0 ceph-mon[74418]: 11.1b scrub starts
Dec 05 09:49:33 compute-0 ceph-mon[74418]: 8.a scrub starts
Dec 05 09:49:33 compute-0 ceph-mon[74418]: 8.a scrub ok
Dec 05 09:49:33 compute-0 ceph-mon[74418]: osdmap e81: 3 total, 3 up, 3 in
Dec 05 09:49:33 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.a deep-scrub starts
Dec 05 09:49:33 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.a deep-scrub ok
Dec 05 09:49:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:34 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:34 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:34 compute-0 podman[98344]: 2025-12-05 09:49:34.680830845 +0000 UTC m=+1.906448208 volume create 253364d6814b274ed972c6cf77ad3ad989a648684453681d70108b7f15228ea8
Dec 05 09:49:34 compute-0 podman[98344]: 2025-12-05 09:49:34.693466382 +0000 UTC m=+1.919083745 container create c258226e4a20e5c039f0ae3296f187b087498587df4b55b25aff86aa639e33a0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=tender_easley, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:34 compute-0 podman[98344]: 2025-12-05 09:49:34.658074786 +0000 UTC m=+1.883692119 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 05 09:49:34 compute-0 systemd[1]: Started libpod-conmon-c258226e4a20e5c039f0ae3296f187b087498587df4b55b25aff86aa639e33a0.scope.
Dec 05 09:49:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac35bf070752c7b7fcf7fe73067bb9bbb1d51d9011759478f04652efc2f363c/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:34 compute-0 podman[98344]: 2025-12-05 09:49:34.797698213 +0000 UTC m=+2.023315536 container init c258226e4a20e5c039f0ae3296f187b087498587df4b55b25aff86aa639e33a0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=tender_easley, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:34 compute-0 podman[98344]: 2025-12-05 09:49:34.805048193 +0000 UTC m=+2.030665536 container start c258226e4a20e5c039f0ae3296f187b087498587df4b55b25aff86aa639e33a0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=tender_easley, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:34 compute-0 tender_easley[98481]: 65534 65534
Dec 05 09:49:34 compute-0 systemd[1]: libpod-c258226e4a20e5c039f0ae3296f187b087498587df4b55b25aff86aa639e33a0.scope: Deactivated successfully.
Dec 05 09:49:34 compute-0 podman[98344]: 2025-12-05 09:49:34.80916869 +0000 UTC m=+2.034786043 container attach c258226e4a20e5c039f0ae3296f187b087498587df4b55b25aff86aa639e33a0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=tender_easley, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:34 compute-0 podman[98344]: 2025-12-05 09:49:34.80955032 +0000 UTC m=+2.035167643 container died c258226e4a20e5c039f0ae3296f187b087498587df4b55b25aff86aa639e33a0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=tender_easley, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-aac35bf070752c7b7fcf7fe73067bb9bbb1d51d9011759478f04652efc2f363c-merged.mount: Deactivated successfully.
Dec 05 09:49:34 compute-0 ceph-mon[74418]: 11.1b scrub ok
Dec 05 09:49:34 compute-0 ceph-mon[74418]: 12.a deep-scrub starts
Dec 05 09:49:34 compute-0 ceph-mon[74418]: 12.a deep-scrub ok
Dec 05 09:49:34 compute-0 ceph-mon[74418]: pgmap v88: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:34 compute-0 ceph-mon[74418]: 12.18 scrub starts
Dec 05 09:49:34 compute-0 ceph-mon[74418]: 12.18 scrub ok
Dec 05 09:49:34 compute-0 podman[98344]: 2025-12-05 09:49:34.858791406 +0000 UTC m=+2.084408729 container remove c258226e4a20e5c039f0ae3296f187b087498587df4b55b25aff86aa639e33a0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=tender_easley, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:34 compute-0 podman[98344]: 2025-12-05 09:49:34.869294327 +0000 UTC m=+2.094911650 volume remove 253364d6814b274ed972c6cf77ad3ad989a648684453681d70108b7f15228ea8
Dec 05 09:49:34 compute-0 systemd[1]: libpod-conmon-c258226e4a20e5c039f0ae3296f187b087498587df4b55b25aff86aa639e33a0.scope: Deactivated successfully.
Dec 05 09:49:34 compute-0 podman[98497]: 2025-12-05 09:49:34.955997574 +0000 UTC m=+0.065136139 volume create 14b3dc5cf2dc559561296eeafe1b1022828495d62669715826c349787fa80cee
Dec 05 09:49:34 compute-0 podman[98497]: 2025-12-05 09:49:34.997286153 +0000 UTC m=+0.106424718 container create 03b19a20adcc0b99bfc7dca7c0e205ed25ad890f95580df8bbb5b6f11254cfe9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=brave_swirles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:35 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 05 09:49:35 compute-0 podman[98497]: 2025-12-05 09:49:34.916654914 +0000 UTC m=+0.025793499 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 05 09:49:35 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 05 09:49:35 compute-0 systemd[1]: Started libpod-conmon-03b19a20adcc0b99bfc7dca7c0e205ed25ad890f95580df8bbb5b6f11254cfe9.scope.
Dec 05 09:49:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace88ea80fa2411a60bad9321f114778aa5825e83e3d3749626417e90692ccc8/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:35 compute-0 podman[98497]: 2025-12-05 09:49:35.086578636 +0000 UTC m=+0.195717231 container init 03b19a20adcc0b99bfc7dca7c0e205ed25ad890f95580df8bbb5b6f11254cfe9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=brave_swirles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:35 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:35 compute-0 podman[98497]: 2025-12-05 09:49:35.093900135 +0000 UTC m=+0.203038700 container start 03b19a20adcc0b99bfc7dca7c0e205ed25ad890f95580df8bbb5b6f11254cfe9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=brave_swirles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:35 compute-0 brave_swirles[98513]: 65534 65534
Dec 05 09:49:35 compute-0 systemd[1]: libpod-03b19a20adcc0b99bfc7dca7c0e205ed25ad890f95580df8bbb5b6f11254cfe9.scope: Deactivated successfully.
Dec 05 09:49:35 compute-0 podman[98497]: 2025-12-05 09:49:35.098001422 +0000 UTC m=+0.207140077 container attach 03b19a20adcc0b99bfc7dca7c0e205ed25ad890f95580df8bbb5b6f11254cfe9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=brave_swirles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:35 compute-0 podman[98497]: 2025-12-05 09:49:35.098576907 +0000 UTC m=+0.207715492 container died 03b19a20adcc0b99bfc7dca7c0e205ed25ad890f95580df8bbb5b6f11254cfe9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=brave_swirles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ace88ea80fa2411a60bad9321f114778aa5825e83e3d3749626417e90692ccc8-merged.mount: Deactivated successfully.
Dec 05 09:49:35 compute-0 podman[98497]: 2025-12-05 09:49:35.144293601 +0000 UTC m=+0.253432166 container remove 03b19a20adcc0b99bfc7dca7c0e205ed25ad890f95580df8bbb5b6f11254cfe9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=brave_swirles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:35 compute-0 podman[98497]: 2025-12-05 09:49:35.147624048 +0000 UTC m=+0.256762613 volume remove 14b3dc5cf2dc559561296eeafe1b1022828495d62669715826c349787fa80cee
Dec 05 09:49:35 compute-0 systemd[1]: libpod-conmon-03b19a20adcc0b99bfc7dca7c0e205ed25ad890f95580df8bbb5b6f11254cfe9.scope: Deactivated successfully.
Dec 05 09:49:35 compute-0 systemd[1]: Reloading.
Dec 05 09:49:35 compute-0 systemd-rc-local-generator[98557]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:49:35 compute-0 systemd-sysv-generator[98561]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:49:35 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 23 completed events
Dec 05 09:49:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:49:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:35 compute-0 systemd[1]: Reloading.
Dec 05 09:49:35 compute-0 systemd-sysv-generator[98598]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:49:35 compute-0 systemd-rc-local-generator[98594]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:49:35 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:49:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:49:35 2025: (VI_0) Entering MASTER STATE
Dec 05 09:49:35 compute-0 ceph-mon[74418]: 11.1d scrub starts
Dec 05 09:49:35 compute-0 ceph-mon[74418]: 11.1d scrub ok
Dec 05 09:49:35 compute-0 ceph-mon[74418]: 10.19 scrub starts
Dec 05 09:49:35 compute-0 ceph-mon[74418]: 10.19 scrub ok
Dec 05 09:49:35 compute-0 ceph-mon[74418]: 12.1a scrub starts
Dec 05 09:49:35 compute-0 ceph-mon[74418]: 12.1a scrub ok
Dec 05 09:49:35 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:36 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.e deep-scrub starts
Dec 05 09:49:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s; 41 B/s, 1 objects/s recovering
Dec 05 09:49:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 05 09:49:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 05 09:49:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:36 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c002010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:36 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.e deep-scrub ok
Dec 05 09:49:36 compute-0 podman[98655]: 2025-12-05 09:49:36.283919953 +0000 UTC m=+0.021991191 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 05 09:49:36 compute-0 podman[98655]: 2025-12-05 09:49:36.387174938 +0000 UTC m=+0.125246126 volume create 3ce96c36949a79e265f28d2e4dd682f09944630357c074912d3a86f2ec1e3f05
Dec 05 09:49:36 compute-0 podman[98655]: 2025-12-05 09:49:36.398402159 +0000 UTC m=+0.136473357 container create aa11c6973d139c2e9bb6746f25caf931656607e7034cefb81d97cc477f867cd1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58a425ce4bbec0686de465a07de8d6f5898fe9b8f5276797892f0af8beef35d/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58a425ce4bbec0686de465a07de8d6f5898fe9b8f5276797892f0af8beef35d/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:36 compute-0 podman[98655]: 2025-12-05 09:49:36.477788425 +0000 UTC m=+0.215859643 container init aa11c6973d139c2e9bb6746f25caf931656607e7034cefb81d97cc477f867cd1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:36 compute-0 podman[98655]: 2025-12-05 09:49:36.483070312 +0000 UTC m=+0.221141510 container start aa11c6973d139c2e9bb6746f25caf931656607e7034cefb81d97cc477f867cd1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:49:36 compute-0 bash[98655]: aa11c6973d139c2e9bb6746f25caf931656607e7034cefb81d97cc477f867cd1
Dec 05 09:49:36 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:49:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[98670]: ts=2025-12-05T09:49:36.520Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec 05 09:49:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[98670]: ts=2025-12-05T09:49:36.520Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec 05 09:49:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[98670]: ts=2025-12-05T09:49:36.530Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec 05 09:49:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[98670]: ts=2025-12-05T09:49:36.532Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec 05 09:49:36 compute-0 sudo[98277]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:49:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:49:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:36 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 05 09:49:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[98670]: ts=2025-12-05T09:49:36.575Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 05 09:49:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[98670]: ts=2025-12-05T09:49:36.576Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 05 09:49:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[98670]: ts=2025-12-05T09:49:36.581Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec 05 09:49:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[98670]: ts=2025-12-05T09:49:36.581Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec 05 09:49:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:36 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 961a0d1f-13bd-40e2-99f5-53483930855c (Updating alertmanager deployment (+1 -> 1))
Dec 05 09:49:36 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 961a0d1f-13bd-40e2-99f5-53483930855c (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Dec 05 09:49:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 05 09:49:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:36 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 1dc43ed1-ca1e-4fa8-a5b2-050f17811a60 (Updating grafana deployment (+1 -> 1))
Dec 05 09:49:36 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Dec 05 09:49:36 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Dec 05 09:49:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Dec 05 09:49:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Dec 05 09:49:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 05 09:49:37 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Dec 05 09:49:37 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Dec 05 09:49:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:37 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Dec 05 09:49:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 05 09:49:37 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 05 09:49:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Dec 05 09:49:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 05 09:49:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 05 09:49:37 compute-0 ceph-mon[74418]: 11.1e scrub starts
Dec 05 09:49:37 compute-0 ceph-mon[74418]: 11.1e scrub ok
Dec 05 09:49:37 compute-0 ceph-mon[74418]: 11.1a scrub starts
Dec 05 09:49:37 compute-0 ceph-mon[74418]: 12.e deep-scrub starts
Dec 05 09:49:37 compute-0 ceph-mon[74418]: 11.1a scrub ok
Dec 05 09:49:37 compute-0 ceph-mon[74418]: pgmap v89: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s; 41 B/s, 1 objects/s recovering
Dec 05 09:49:37 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 05 09:49:37 compute-0 ceph-mon[74418]: 12.e deep-scrub ok
Dec 05 09:49:37 compute-0 ceph-mon[74418]: 8.6 deep-scrub starts
Dec 05 09:49:37 compute-0 ceph-mon[74418]: 8.6 deep-scrub ok
Dec 05 09:49:37 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:37 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:37 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:37 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:37 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:37 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 05 09:49:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:37 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Dec 05 09:49:37 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Dec 05 09:49:37 compute-0 sudo[98692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:49:37 compute-0 sudo[98692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:49:37 compute-0 sudo[98692]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:37 compute-0 sudo[98717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:49:37 compute-0 sudo[98717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:49:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:49:38 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.b scrub starts
Dec 05 09:49:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:38 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:38 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.b scrub ok
Dec 05 09:49:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 82 pg[9.9( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=82 pruub=9.460470200s) [2] r=-1 lpr=82 pi=[55,82)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 250.101806641s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 82 pg[9.9( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=82 pruub=9.460437775s) [2] r=-1 lpr=82 pi=[55,82)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.101806641s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 82 pg[9.19( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=82 pruub=9.460659981s) [2] r=-1 lpr=82 pi=[55,82)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 250.102645874s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 82 pg[9.19( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=82 pruub=9.460627556s) [2] r=-1 lpr=82 pi=[55,82)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.102645874s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:38 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:49:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 23 op/s; 36 B/s, 1 objects/s recovering
Dec 05 09:49:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 05 09:49:38 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 05 09:49:38 compute-0 ceph-mon[74418]: Regenerating cephadm self-signed grafana TLS certificates
Dec 05 09:49:38 compute-0 ceph-mon[74418]: 10.8 deep-scrub starts
Dec 05 09:49:38 compute-0 ceph-mon[74418]: 10.8 deep-scrub ok
Dec 05 09:49:38 compute-0 ceph-mon[74418]: 8.12 scrub starts
Dec 05 09:49:38 compute-0 ceph-mon[74418]: 8.12 scrub ok
Dec 05 09:49:38 compute-0 ceph-mon[74418]: 10.3 scrub starts
Dec 05 09:49:38 compute-0 ceph-mon[74418]: 10.3 scrub ok
Dec 05 09:49:38 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:38 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 05 09:49:38 compute-0 ceph-mon[74418]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 05 09:49:38 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 05 09:49:38 compute-0 ceph-mon[74418]: osdmap e82: 3 total, 3 up, 3 in
Dec 05 09:49:38 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:38 compute-0 ceph-mon[74418]: Deploying daemon grafana.compute-0 on compute-0
Dec 05 09:49:38 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 05 09:49:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 05 09:49:38 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 05 09:49:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 05 09:49:38 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 05 09:49:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 83 pg[9.9( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=83) [2]/[1] r=0 lpr=83 pi=[55,83)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 83 pg[9.a( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=83 pruub=9.106807709s) [0] r=-1 lpr=83 pi=[55,83)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 250.102249146s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 83 pg[9.a( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=83 pruub=9.106513023s) [0] r=-1 lpr=83 pi=[55,83)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.102249146s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 83 pg[9.1a( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=83 pruub=9.106238365s) [0] r=-1 lpr=83 pi=[55,83)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 250.102386475s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 83 pg[9.1a( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=83 pruub=9.106189728s) [0] r=-1 lpr=83 pi=[55,83)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.102386475s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 83 pg[9.19( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=83) [2]/[1] r=0 lpr=83 pi=[55,83)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 83 pg[9.19( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=83) [2]/[1] r=0 lpr=83 pi=[55,83)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 83 pg[9.9( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=83) [2]/[1] r=0 lpr=83 pi=[55,83)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[98670]: ts=2025-12-05T09:49:38.532Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000233917s
Dec 05 09:49:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:38 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c002010 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:39 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.c scrub starts
Dec 05 09:49:39 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.c scrub ok
Dec 05 09:49:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:39 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88002000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 05 09:49:39 compute-0 ceph-mon[74418]: 11.1c scrub starts
Dec 05 09:49:39 compute-0 ceph-mon[74418]: 12.b scrub starts
Dec 05 09:49:39 compute-0 ceph-mon[74418]: 11.1c scrub ok
Dec 05 09:49:39 compute-0 ceph-mon[74418]: 12.b scrub ok
Dec 05 09:49:39 compute-0 ceph-mon[74418]: pgmap v91: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 23 op/s; 36 B/s, 1 objects/s recovering
Dec 05 09:49:39 compute-0 ceph-mon[74418]: 12.7 scrub starts
Dec 05 09:49:39 compute-0 ceph-mon[74418]: 12.7 scrub ok
Dec 05 09:49:39 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 05 09:49:39 compute-0 ceph-mon[74418]: osdmap e83: 3 total, 3 up, 3 in
Dec 05 09:49:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 05 09:49:39 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 05 09:49:39 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 84 pg[9.1a( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=84) [0]/[1] r=0 lpr=84 pi=[55,84)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:39 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 84 pg[9.1a( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=84) [0]/[1] r=0 lpr=84 pi=[55,84)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:39 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 84 pg[9.a( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=84) [0]/[1] r=0 lpr=84 pi=[55,84)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:39 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 84 pg[9.a( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=84) [0]/[1] r=0 lpr=84 pi=[55,84)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:39 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 84 pg[9.9( v 52'1029 (0'0,52'1029] local-lis/les=83/84 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[55,83)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:39 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 84 pg[9.19( v 52'1029 (0'0,52'1029] local-lis/les=83/84 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[55,83)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:40 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c004050 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:40 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.6 deep-scrub starts
Dec 05 09:49:40 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.6 deep-scrub ok
Dec 05 09:49:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 20 op/s; 10 B/s, 0 objects/s recovering
Dec 05 09:49:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 05 09:49:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 05 09:49:40 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 24 completed events
Dec 05 09:49:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:49:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:40 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 05 09:49:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:40 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event bdde75bd-ea53-45f6-8364-52ebbbbb5104 (Global Recovery Event) in 30 seconds
Dec 05 09:49:41 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 05 09:49:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 05 09:49:41 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 05 09:49:41 compute-0 ceph-mon[74418]: 8.19 scrub starts
Dec 05 09:49:41 compute-0 ceph-mon[74418]: 8.19 scrub ok
Dec 05 09:49:41 compute-0 ceph-mon[74418]: 12.c scrub starts
Dec 05 09:49:41 compute-0 ceph-mon[74418]: 12.c scrub ok
Dec 05 09:49:41 compute-0 ceph-mon[74418]: 8.f scrub starts
Dec 05 09:49:41 compute-0 ceph-mon[74418]: 8.f scrub ok
Dec 05 09:49:41 compute-0 ceph-mon[74418]: osdmap e84: 3 total, 3 up, 3 in
Dec 05 09:49:41 compute-0 ceph-mon[74418]: 12.6 deep-scrub starts
Dec 05 09:49:41 compute-0 ceph-mon[74418]: 12.6 deep-scrub ok
Dec 05 09:49:41 compute-0 ceph-mon[74418]: pgmap v94: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 20 op/s; 10 B/s, 0 objects/s recovering
Dec 05 09:49:41 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 05 09:49:41 compute-0 ceph-mon[74418]: 9.1d scrub starts
Dec 05 09:49:41 compute-0 ceph-mon[74418]: 9.1d scrub ok
Dec 05 09:49:41 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 85 pg[9.19( v 52'1029 (0'0,52'1029] local-lis/les=83/84 n=5 ec=55/40 lis/c=83/55 les/c/f=84/57/0 sis=85 pruub=14.520471573s) [2] async=[2] r=-1 lpr=85 pi=[55,85)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 258.155273438s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:41 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 85 pg[9.9( v 52'1029 (0'0,52'1029] local-lis/les=83/84 n=6 ec=55/40 lis/c=83/55 les/c/f=84/57/0 sis=85 pruub=14.516498566s) [2] async=[2] r=-1 lpr=85 pi=[55,85)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 258.151367188s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:41 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 85 pg[9.19( v 52'1029 (0'0,52'1029] local-lis/les=83/84 n=5 ec=55/40 lis/c=83/55 les/c/f=84/57/0 sis=85 pruub=14.520376205s) [2] r=-1 lpr=85 pi=[55,85)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 258.155273438s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:41 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 85 pg[9.9( v 52'1029 (0'0,52'1029] local-lis/les=83/84 n=6 ec=55/40 lis/c=83/55 les/c/f=84/57/0 sis=85 pruub=14.516395569s) [2] r=-1 lpr=85 pi=[55,85)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 258.151367188s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:41 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 05 09:49:41 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 05 09:49:41 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 85 pg[9.a( v 52'1029 (0'0,52'1029] local-lis/les=84/85 n=6 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=84) [0]/[1] async=[0] r=0 lpr=84 pi=[55,84)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:41 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 85 pg[9.1a( v 52'1029 (0'0,52'1029] local-lis/les=84/85 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=84) [0]/[1] async=[0] r=0 lpr=84 pi=[55,84)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:41 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:49:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:41 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:49:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:41 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c002010 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 05 09:49:42 compute-0 ceph-mon[74418]: 9.6 deep-scrub starts
Dec 05 09:49:42 compute-0 ceph-mon[74418]: 9.6 deep-scrub ok
Dec 05 09:49:42 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:42 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 05 09:49:42 compute-0 ceph-mon[74418]: 9.1e scrub starts
Dec 05 09:49:42 compute-0 ceph-mon[74418]: 10.13 scrub starts
Dec 05 09:49:42 compute-0 ceph-mon[74418]: 10.13 scrub ok
Dec 05 09:49:42 compute-0 ceph-mon[74418]: osdmap e85: 3 total, 3 up, 3 in
Dec 05 09:49:42 compute-0 ceph-mon[74418]: 9.1e scrub ok
Dec 05 09:49:42 compute-0 ceph-mon[74418]: 9.1f scrub starts
Dec 05 09:49:42 compute-0 ceph-mon[74418]: 9.1f scrub ok
Dec 05 09:49:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 05 09:49:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:42 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88002000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:42 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Dec 05 09:49:42 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Dec 05 09:49:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v96: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 4 B/s, 0 objects/s recovering
Dec 05 09:49:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:42 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88002000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:42 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 05 09:49:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 05 09:49:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 05 09:49:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:49:42 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 86 pg[9.a( v 52'1029 (0'0,52'1029] local-lis/les=84/85 n=6 ec=55/40 lis/c=84/55 les/c/f=85/57/0 sis=86 pruub=14.352798462s) [0] async=[0] r=-1 lpr=86 pi=[55,86)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 259.648803711s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:42 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 86 pg[9.a( v 52'1029 (0'0,52'1029] local-lis/les=84/85 n=6 ec=55/40 lis/c=84/55 les/c/f=85/57/0 sis=86 pruub=14.352709770s) [0] r=-1 lpr=86 pi=[55,86)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.648803711s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:42 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 86 pg[9.1a( v 52'1029 (0'0,52'1029] local-lis/les=84/85 n=5 ec=55/40 lis/c=84/55 les/c/f=85/57/0 sis=86 pruub=14.356117249s) [0] async=[0] r=-1 lpr=86 pi=[55,86)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 259.652709961s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:42 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 86 pg[9.1a( v 52'1029 (0'0,52'1029] local-lis/les=84/85 n=5 ec=55/40 lis/c=84/55 les/c/f=85/57/0 sis=86 pruub=14.356018066s) [0] r=-1 lpr=86 pi=[55,86)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.652709961s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 05 09:49:43 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 05 09:49:43 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 05 09:49:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:43 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:43 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 05 09:49:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 05 09:49:43 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 05 09:49:43 compute-0 ceph-mon[74418]: 12.12 scrub starts
Dec 05 09:49:43 compute-0 ceph-mon[74418]: 12.12 scrub ok
Dec 05 09:49:43 compute-0 ceph-mon[74418]: pgmap v96: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 4 B/s, 0 objects/s recovering
Dec 05 09:49:43 compute-0 ceph-mon[74418]: 9.d scrub starts
Dec 05 09:49:43 compute-0 ceph-mon[74418]: 9.d scrub ok
Dec 05 09:49:43 compute-0 ceph-mon[74418]: osdmap e86: 3 total, 3 up, 3 in
Dec 05 09:49:43 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 05 09:49:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:44 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:44 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Dec 05 09:49:44 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Dec 05 09:49:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:44 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 09:49:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 4 B/s, 0 objects/s recovering
Dec 05 09:49:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 05 09:49:44 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 05 09:49:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:44 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 05 09:49:44 compute-0 ceph-mon[74418]: 10.18 scrub starts
Dec 05 09:49:44 compute-0 ceph-mon[74418]: 10.18 scrub ok
Dec 05 09:49:44 compute-0 ceph-mon[74418]: 9.18 scrub starts
Dec 05 09:49:44 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 05 09:49:44 compute-0 ceph-mon[74418]: osdmap e87: 3 total, 3 up, 3 in
Dec 05 09:49:44 compute-0 ceph-mon[74418]: 9.18 scrub ok
Dec 05 09:49:44 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 05 09:49:44 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:44.752741) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928184752896, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7992, "num_deletes": 252, "total_data_size": 14045209, "memory_usage": 14629920, "flush_reason": "Manual Compaction"}
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 05 09:49:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 05 09:49:44 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928184978997, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12015309, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 153, "largest_seqno": 8136, "table_properties": {"data_size": 11985676, "index_size": 18830, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9733, "raw_key_size": 94352, "raw_average_key_size": 24, "raw_value_size": 11912029, "raw_average_value_size": 3079, "num_data_blocks": 831, "num_entries": 3868, "num_filter_entries": 3868, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927806, "oldest_key_time": 1764927806, "file_creation_time": 1764928184, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 226310 microseconds, and 120253 cpu microseconds.
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:44.979064) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12015309 bytes OK
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:44.979090) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:44.985706) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:44.985744) EVENT_LOG_v1 {"time_micros": 1764928184985738, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:44.985765) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 14008453, prev total WAL file size 14008494, number of live WAL files 2.
Dec 05 09:49:44 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:44.988455) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(58KB) 8(1944B)]
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928184988621, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12077009, "oldest_snapshot_seqno": -1}
Dec 05 09:49:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:45 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:45 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 05 09:49:45 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3686 keys, 12029883 bytes, temperature: kUnknown
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928185218763, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12029883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12000748, "index_size": 18849, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9221, "raw_key_size": 92472, "raw_average_key_size": 25, "raw_value_size": 11928737, "raw_average_value_size": 3236, "num_data_blocks": 833, "num_entries": 3686, "num_filter_entries": 3686, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764928184, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:45.219302) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12029883 bytes
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:45.222996) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 52.4 rd, 52.2 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.5, 0.0 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3980, records dropped: 294 output_compression: NoCompression
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:45.223020) EVENT_LOG_v1 {"time_micros": 1764928185223009, "job": 4, "event": "compaction_finished", "compaction_time_micros": 230338, "compaction_time_cpu_micros": 126611, "output_level": 6, "num_output_files": 1, "total_output_size": 12029883, "num_input_records": 3980, "num_output_records": 3686, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928185225964, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928185226043, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928185226120, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 05 09:49:45 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:44.988180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:49:45 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 25 completed events
Dec 05 09:49:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:49:45 compute-0 ceph-mon[74418]: 12.8 scrub starts
Dec 05 09:49:45 compute-0 ceph-mon[74418]: 12.8 scrub ok
Dec 05 09:49:45 compute-0 ceph-mon[74418]: 9.a scrub starts
Dec 05 09:49:45 compute-0 ceph-mon[74418]: 9.a scrub ok
Dec 05 09:49:45 compute-0 ceph-mon[74418]: pgmap v99: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 4 B/s, 0 objects/s recovering
Dec 05 09:49:45 compute-0 ceph-mon[74418]: 9.9 scrub starts
Dec 05 09:49:45 compute-0 ceph-mon[74418]: 9.9 scrub ok
Dec 05 09:49:45 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 05 09:49:45 compute-0 ceph-mon[74418]: osdmap e88: 3 total, 3 up, 3 in
Dec 05 09:49:45 compute-0 ceph-mon[74418]: 10.15 scrub starts
Dec 05 09:49:45 compute-0 ceph-mon[74418]: 10.15 scrub ok
Dec 05 09:49:45 compute-0 ceph-mon[74418]: 9.19 deep-scrub starts
Dec 05 09:49:45 compute-0 ceph-mon[74418]: 9.19 deep-scrub ok
Dec 05 09:49:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 05 09:49:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:45 compute-0 ceph-mgr[74711]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Dec 05 09:49:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 05 09:49:45 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 05 09:49:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:46 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:46 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Dec 05 09:49:46 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:49:46
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', '.nfs', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'backups', 'vms', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.control']
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 05 09:49:46 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:49:46 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:49:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:46 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[98670]: ts=2025-12-05T09:49:46.634Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.102089054s
Dec 05 09:49:47 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 05 09:49:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:47 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v103: 353 pgs: 353 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 05 09:49:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 05 09:49:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 05 09:49:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:49 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c000ea0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:49 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.0 deep-scrub starts
Dec 05 09:49:49 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 05 09:49:49 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.0 deep-scrub ok
Dec 05 09:49:49 compute-0 ceph-mon[74418]: 9.1a scrub starts
Dec 05 09:49:49 compute-0 ceph-mon[74418]: 9.1a scrub ok
Dec 05 09:49:49 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:49 compute-0 ceph-mon[74418]: osdmap e89: 3 total, 3 up, 3 in
Dec 05 09:49:49 compute-0 ceph-mon[74418]: 12.10 scrub starts
Dec 05 09:49:49 compute-0 ceph-mon[74418]: 12.10 scrub ok
Dec 05 09:49:49 compute-0 ceph-mon[74418]: pgmap v102: 353 pgs: 353 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:49 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 05 09:49:49 compute-0 ceph-mon[74418]: 9.1b scrub starts
Dec 05 09:49:49 compute-0 ceph-mon[74418]: 9.1b scrub ok
Dec 05 09:49:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/094949 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 09:49:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 05 09:49:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 05 09:49:49 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 05 09:49:49 compute-0 podman[98783]: 2025-12-05 09:49:49.520701155 +0000 UTC m=+11.514678281 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 05 09:49:49 compute-0 podman[98783]: 2025-12-05 09:49:49.549494712 +0000 UTC m=+11.543471818 container create e4627107d3e4db312ae59de773cde3791cf254c245b6a4c31a5c04afb9cdb7ee (image=quay.io/ceph/grafana:10.4.0, name=focused_herschel, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:49 compute-0 systemd[92054]: Starting Mark boot as successful...
Dec 05 09:49:49 compute-0 systemd[92054]: Finished Mark boot as successful.
Dec 05 09:49:49 compute-0 systemd[1]: Started libpod-conmon-e4627107d3e4db312ae59de773cde3791cf254c245b6a4c31a5c04afb9cdb7ee.scope.
Dec 05 09:49:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:49:49 compute-0 podman[98783]: 2025-12-05 09:49:49.620360917 +0000 UTC m=+11.614338053 container init e4627107d3e4db312ae59de773cde3791cf254c245b6a4c31a5c04afb9cdb7ee (image=quay.io/ceph/grafana:10.4.0, name=focused_herschel, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:49 compute-0 podman[98783]: 2025-12-05 09:49:49.62859138 +0000 UTC m=+11.622568486 container start e4627107d3e4db312ae59de773cde3791cf254c245b6a4c31a5c04afb9cdb7ee (image=quay.io/ceph/grafana:10.4.0, name=focused_herschel, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:49 compute-0 focused_herschel[99008]: 472 0
Dec 05 09:49:49 compute-0 podman[98783]: 2025-12-05 09:49:49.633640411 +0000 UTC m=+11.627617547 container attach e4627107d3e4db312ae59de773cde3791cf254c245b6a4c31a5c04afb9cdb7ee (image=quay.io/ceph/grafana:10.4.0, name=focused_herschel, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:49 compute-0 conmon[99008]: conmon e4627107d3e4db312ae5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4627107d3e4db312ae59de773cde3791cf254c245b6a4c31a5c04afb9cdb7ee.scope/container/memory.events
Dec 05 09:49:49 compute-0 systemd[1]: libpod-e4627107d3e4db312ae59de773cde3791cf254c245b6a4c31a5c04afb9cdb7ee.scope: Deactivated successfully.
Dec 05 09:49:49 compute-0 podman[98783]: 2025-12-05 09:49:49.635462568 +0000 UTC m=+11.629439704 container died e4627107d3e4db312ae59de773cde3791cf254c245b6a4c31a5c04afb9cdb7ee (image=quay.io/ceph/grafana:10.4.0, name=focused_herschel, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e65de6f50440d87869efd7e651c1f921bf15eafaec26affdf1aaeaf22e4f9dcd-merged.mount: Deactivated successfully.
Dec 05 09:49:49 compute-0 podman[98783]: 2025-12-05 09:49:49.792929208 +0000 UTC m=+11.786906314 container remove e4627107d3e4db312ae59de773cde3791cf254c245b6a4c31a5c04afb9cdb7ee (image=quay.io/ceph/grafana:10.4.0, name=focused_herschel, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:49 compute-0 systemd[1]: libpod-conmon-e4627107d3e4db312ae59de773cde3791cf254c245b6a4c31a5c04afb9cdb7ee.scope: Deactivated successfully.
Dec 05 09:49:49 compute-0 podman[99025]: 2025-12-05 09:49:49.877334334 +0000 UTC m=+0.054959675 container create c6038a5c2264292930777c6fcfc979325cf82c23748b7acb61245a2cc99ecbcd (image=quay.io/ceph/grafana:10.4.0, name=interesting_cannon, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:49 compute-0 systemd[1]: Started libpod-conmon-c6038a5c2264292930777c6fcfc979325cf82c23748b7acb61245a2cc99ecbcd.scope.
Dec 05 09:49:49 compute-0 podman[99025]: 2025-12-05 09:49:49.852349787 +0000 UTC m=+0.029975128 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 05 09:49:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:49:49 compute-0 podman[99025]: 2025-12-05 09:49:49.969857541 +0000 UTC m=+0.147482942 container init c6038a5c2264292930777c6fcfc979325cf82c23748b7acb61245a2cc99ecbcd (image=quay.io/ceph/grafana:10.4.0, name=interesting_cannon, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:49 compute-0 podman[99025]: 2025-12-05 09:49:49.97561233 +0000 UTC m=+0.153237651 container start c6038a5c2264292930777c6fcfc979325cf82c23748b7acb61245a2cc99ecbcd (image=quay.io/ceph/grafana:10.4.0, name=interesting_cannon, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:49 compute-0 interesting_cannon[99043]: 472 0
Dec 05 09:49:49 compute-0 systemd[1]: libpod-c6038a5c2264292930777c6fcfc979325cf82c23748b7acb61245a2cc99ecbcd.scope: Deactivated successfully.
Dec 05 09:49:49 compute-0 podman[99025]: 2025-12-05 09:49:49.979455649 +0000 UTC m=+0.157080970 container attach c6038a5c2264292930777c6fcfc979325cf82c23748b7acb61245a2cc99ecbcd (image=quay.io/ceph/grafana:10.4.0, name=interesting_cannon, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:49 compute-0 podman[99025]: 2025-12-05 09:49:49.980906897 +0000 UTC m=+0.158532238 container died c6038a5c2264292930777c6fcfc979325cf82c23748b7acb61245a2cc99ecbcd (image=quay.io/ceph/grafana:10.4.0, name=interesting_cannon, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:50 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c000ea0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:50 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.1 deep-scrub starts
Dec 05 09:49:50 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.1 deep-scrub ok
Dec 05 09:49:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 05 09:49:50 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 05 09:49:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-1108994a9061f741020bbaeee022cb2a99d97ecc56e85ee212d59ddb8200393c-merged.mount: Deactivated successfully.
Dec 05 09:49:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:50 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c000ea0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:50 compute-0 sshd-session[99063]: Accepted publickey for zuul from 192.168.122.30 port 46854 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:49:50 compute-0 systemd-logind[789]: New session 37 of user zuul.
Dec 05 09:49:50 compute-0 systemd[1]: Started Session 37 of User zuul.
Dec 05 09:49:50 compute-0 sshd-session[99063]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:49:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:51 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:51 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 05 09:49:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 05 09:49:51 compute-0 ceph-mon[74418]: 9.c scrub starts
Dec 05 09:49:51 compute-0 ceph-mon[74418]: 9.5 scrub starts
Dec 05 09:49:51 compute-0 ceph-mon[74418]: 9.5 scrub ok
Dec 05 09:49:51 compute-0 ceph-mon[74418]: 9.7 scrub starts
Dec 05 09:49:51 compute-0 ceph-mon[74418]: pgmap v103: 353 pgs: 353 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:51 compute-0 ceph-mon[74418]: 9.7 scrub ok
Dec 05 09:49:51 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 05 09:49:51 compute-0 ceph-mon[74418]: 9.3 scrub starts
Dec 05 09:49:51 compute-0 ceph-mon[74418]: 9.0 deep-scrub starts
Dec 05 09:49:51 compute-0 ceph-mon[74418]: 9.c scrub ok
Dec 05 09:49:51 compute-0 ceph-mon[74418]: 9.3 scrub ok
Dec 05 09:49:51 compute-0 ceph-mon[74418]: 9.0 deep-scrub ok
Dec 05 09:49:51 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 05 09:49:51 compute-0 ceph-mon[74418]: osdmap e90: 3 total, 3 up, 3 in
Dec 05 09:49:51 compute-0 ceph-mon[74418]: pgmap v105: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:51 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 05 09:49:51 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 05 09:49:51 compute-0 podman[99048]: 2025-12-05 09:49:51.518183759 +0000 UTC m=+1.520120378 container remove c6038a5c2264292930777c6fcfc979325cf82c23748b7acb61245a2cc99ecbcd (image=quay.io/ceph/grafana:10.4.0, name=interesting_cannon, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:49:51 compute-0 systemd[1]: libpod-conmon-c6038a5c2264292930777c6fcfc979325cf82c23748b7acb61245a2cc99ecbcd.scope: Deactivated successfully.
Dec 05 09:49:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 05 09:49:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 05 09:49:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:49:51 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 09:49:51 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 05 09:49:51 compute-0 systemd[1]: Reloading.
Dec 05 09:49:51 compute-0 systemd-rc-local-generator[99216]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:49:51 compute-0 systemd-sysv-generator[99219]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:49:51 compute-0 systemd[1]: Reloading.
Dec 05 09:49:52 compute-0 systemd-rc-local-generator[99281]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:49:52 compute-0 systemd-sysv-generator[99288]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:52 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:52 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 05 09:49:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec 05 09:49:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 05 09:49:52 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 05 09:49:52 compute-0 python3.9[99258]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:49:52 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:49:52 compute-0 ceph-mon[74418]: 9.8 scrub starts
Dec 05 09:49:52 compute-0 ceph-mon[74418]: 9.1 deep-scrub starts
Dec 05 09:49:52 compute-0 ceph-mon[74418]: 9.8 scrub ok
Dec 05 09:49:52 compute-0 ceph-mon[74418]: 9.1 deep-scrub ok
Dec 05 09:49:52 compute-0 ceph-mon[74418]: 9.4 scrub starts
Dec 05 09:49:52 compute-0 ceph-mon[74418]: 9.f scrub starts
Dec 05 09:49:52 compute-0 ceph-mon[74418]: 9.f scrub ok
Dec 05 09:49:52 compute-0 ceph-mon[74418]: 9.4 scrub ok
Dec 05 09:49:52 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 05 09:49:52 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 05 09:49:52 compute-0 ceph-mon[74418]: osdmap e91: 3 total, 3 up, 3 in
Dec 05 09:49:52 compute-0 ceph-mon[74418]: pgmap v107: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:52 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 05 09:49:52 compute-0 podman[99353]: 2025-12-05 09:49:52.509460668 +0000 UTC m=+0.071871472 container create bfc89c7b51db319a90bd517ef6d4861794d073950d7be4a9d66708be3b568f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 05 09:49:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 05 09:49:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 05 09:49:52 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
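The audit trail above shows mgr.compute-0.hvnxai stepping pgp_num_actual on 'default.rgw.log' one value at a time (15, 16, 17), each step dispatched as a mon_command and followed by a new osdmap epoch (e91 -> e92). The same payload could be sent from Python through the librados binding; the sketch below copies the command JSON verbatim from the log, while the config path, credentials, and a reachable cluster are assumptions not confirmed here.

import json
import rados

# Command JSON copied verbatim from the dispatched mon_command in the audit log.
cmd = {"prefix": "osd pool set", "pool": "default.rgw.log",
       "var": "pgp_num_actual", "val": "17"}

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed config path
cluster.connect()
try:
    # mon_command() returns (return code, output buffer, status string)
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
    print(ret, outs)
finally:
    cluster.shutdown()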
Dec 05 09:49:52 compute-0 podman[99353]: 2025-12-05 09:49:52.479906173 +0000 UTC m=+0.042317067 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 05 09:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f04e1d0fd365ee990e4838604021bd5bd072e7f44f32bbfc4b7ca69751bd7682/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f04e1d0fd365ee990e4838604021bd5bd072e7f44f32bbfc4b7ca69751bd7682/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f04e1d0fd365ee990e4838604021bd5bd072e7f44f32bbfc4b7ca69751bd7682/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f04e1d0fd365ee990e4838604021bd5bd072e7f44f32bbfc4b7ca69751bd7682/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f04e1d0fd365ee990e4838604021bd5bd072e7f44f32bbfc4b7ca69751bd7682/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:52 compute-0 podman[99353]: 2025-12-05 09:49:52.576337591 +0000 UTC m=+0.138748415 container init bfc89c7b51db319a90bd517ef6d4861794d073950d7be4a9d66708be3b568f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:52 compute-0 podman[99353]: 2025-12-05 09:49:52.586175487 +0000 UTC m=+0.148586291 container start bfc89c7b51db319a90bd517ef6d4861794d073950d7be4a9d66708be3b568f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:49:52 compute-0 bash[99353]: bfc89c7b51db319a90bd517ef6d4861794d073950d7be4a9d66708be3b568f21
Dec 05 09:49:52 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:52 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:52 compute-0 sudo[98717]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:49:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:52 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 92 pg[9.10( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=2 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=92 pruub=10.867936134s) [0] r=-1 lpr=92 pi=[55,92)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 266.108825684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:52 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 92 pg[9.10( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=2 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=92 pruub=10.867881775s) [0] r=-1 lpr=92 pi=[55,92)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 266.108825684s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:49:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 05 09:49:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:52 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 1dc43ed1-ca1e-4fa8-a5b2-050f17811a60 (Updating grafana deployment (+1 -> 1))
Dec 05 09:49:52 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 1dc43ed1-ca1e-4fa8-a5b2-050f17811a60 (Updating grafana deployment (+1 -> 1)) in 16 seconds
Dec 05 09:49:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 05 09:49:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:52 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev f91c231f-bb22-4de4-94fb-e77a985fe922 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec 05 09:49:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Dec 05 09:49:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:52 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.fnpxdf on compute-0
Dec 05 09:49:52 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.fnpxdf on compute-0
Dec 05 09:49:52 compute-0 sudo[99400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.808910626Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-05T09:49:52Z
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809789068Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809802309Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec 05 09:49:52 compute-0 sudo[99400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809807539Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809812289Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809817719Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809822559Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809827459Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809832349Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.80983749Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.80984224Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.80984696Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.8098517Z level=info msg=Target target=[all]
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.80986932Z level=info msg="Path Home" path=/usr/share/grafana
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809873991Z level=info msg="Path Data" path=/var/lib/grafana
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809878611Z level=info msg="Path Logs" path=/var/log/grafana
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809883961Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809889061Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=settings t=2025-12-05T09:49:52.809906651Z level=info msg="App mode production"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=sqlstore t=2025-12-05T09:49:52.810426236Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=sqlstore t=2025-12-05T09:49:52.8105828Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.811750749Z level=info msg="Starting DB migrations"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.814977813Z level=info msg="Executing migration" id="create migration_log table"
Dec 05 09:49:52 compute-0 sudo[99400]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.817484328Z level=info msg="Migration successfully executed" id="create migration_log table" duration=2.502685ms
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.82604197Z level=info msg="Executing migration" id="create user table"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.827392075Z level=info msg="Migration successfully executed" id="create user table" duration=1.353526ms
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.83145027Z level=info msg="Executing migration" id="add unique index user.login"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.832441136Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=988.806µs
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.835832464Z level=info msg="Executing migration" id="add unique index user.email"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.836736167Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=908.784µs
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.839292753Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.840011171Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=719.238µs
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.842392683Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.843014829Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=619.746µs
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.848879871Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.85192717Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.045149ms
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.854574819Z level=info msg="Executing migration" id="create user table v2"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.85576622Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.191612ms
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.858779528Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.859474006Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=696.428µs
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.862989317Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.863526611Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=537.065µs
Dec 05 09:49:52 compute-0 sudo[99439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:49:52 compute-0 sudo[99439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.873284844Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.873825517Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=543.924µs
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.87623952Z level=info msg="Executing migration" id="Drop old table user_v1"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.876824115Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=587.835µs
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.880008287Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.881791504Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.782997ms
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.983126019Z level=info msg="Executing migration" id="Update user table charset"
Dec 05 09:49:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:52.983214881Z level=info msg="Migration successfully executed" id="Update user table charset" duration=95.692µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.017923121Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.020033435Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=2.137195ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.022447417Z level=info msg="Executing migration" id="Add missing user data"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.022760845Z level=info msg="Migration successfully executed" id="Add missing user data" duration=313.318µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.024993633Z level=info msg="Executing migration" id="Add is_disabled column to user"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.026254097Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.260234ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.028782762Z level=info msg="Executing migration" id="Add index user.login/user.email"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.029573942Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=790.88µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.032348004Z level=info msg="Executing migration" id="Add is_service_account column to user"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.033622107Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.274043ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.035634789Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.045075903Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.437764ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.047866196Z level=info msg="Executing migration" id="Add uid column to user"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.049907999Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.040153ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.052145347Z level=info msg="Executing migration" id="Update uid column values for users"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.052510346Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=335.929µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.055150005Z level=info msg="Executing migration" id="Add unique index user_uid"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.05611663Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=966.775µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.059644482Z level=info msg="Executing migration" id="create temp user table v1-7"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.06073841Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.096838ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.063636825Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.064430085Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=793.19µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.068034089Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.069351413Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.324584ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.072651148Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.073355557Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=705.889µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.0781349Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.079729492Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.608662ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.083446928Z level=info msg="Executing migration" id="Update temp_user table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.08353078Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=83.482µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.088084278Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.089224287Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.135029ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.092226805Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.093575251Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.346505ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.104547075Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.106214377Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.673242ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.109547774Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.11015011Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=601.576µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:53 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c0022e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.11440574Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.117043908Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.638158ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.119571124Z level=info msg="Executing migration" id="create temp_user v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.1201989Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=627.506µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.123413093Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.126839142Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=3.408919ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.132835828Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.133794122Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=958.584µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.138236618Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.139163621Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=931.074µs
Dec 05 09:49:53 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.12 deep-scrub starts
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.141871021Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.142780425Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=909.004µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.148102923Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.148690268Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=587.625µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.152117357Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.152973299Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=856.422µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.155682039Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.156220993Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=539.514µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.158874412Z level=info msg="Executing migration" id="create star table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.159716673Z level=info msg="Migration successfully executed" id="create star table" duration=846.091µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.162031294Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.162943777Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=912.084µs
Dec 05 09:49:53 compute-0 ceph-osd[82677]: log_channel(cluster) log [DBG] : 9.12 deep-scrub ok
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.166675284Z level=info msg="Executing migration" id="create org table v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.167649999Z level=info msg="Migration successfully executed" id="create org table v1" duration=974.045µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.170947524Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.171828018Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=880.424µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.175299177Z level=info msg="Executing migration" id="create org_user table v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.176415636Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.111019ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.180184814Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.180865962Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=682.028µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.184901976Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.185852321Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=950.485µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.189206318Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.19008424Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=878.682µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.193124029Z level=info msg="Executing migration" id="Update org table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.193149659Z level=info msg="Migration successfully executed" id="Update org table charset" duration=25.95µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.195549152Z level=info msg="Executing migration" id="Update org_user table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.195564732Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=16.24µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.19738168Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.197512073Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=130.293µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.199439723Z level=info msg="Executing migration" id="create dashboard table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.20008787Z level=info msg="Migration successfully executed" id="create dashboard table" duration=647.577µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.202486172Z level=info msg="Executing migration" id="add index dashboard.account_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.203634581Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.148409ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.207389218Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.20857937Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.187202ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.211739791Z level=info msg="Executing migration" id="create dashboard_tag table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.212623394Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=886.703µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.215654992Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.216787792Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.12981ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.221043642Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.2221105Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.066758ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.226858743Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.231481313Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.62414ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.234551262Z level=info msg="Executing migration" id="create dashboard v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.235159728Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=607.646µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.24102909Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.241685238Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=655.757µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.247813266Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.24874468Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=932.094µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.252108827Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.252682162Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=573.435µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.254669863Z level=info msg="Executing migration" id="drop table dashboard_v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.255856085Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.186572ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.260687869Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.260769311Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=81.392µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.263922294Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.265959606Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.036572ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.268367558Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.269916999Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.552951ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.273425249Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.27501844Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.593141ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.276636352Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.27733014Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=693.568µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.280700458Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.282758431Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.054633ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.295529372Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.296790024Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.264992ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.302867882Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Dec 05 09:49:53 compute-0 podman[99602]: 2025-12-05 09:49:53.302580974 +0000 UTC m=+0.052052579 container create b69e3595d48cccfa23f321b459cb44e2f91aa430d285b58612484d34f45d0cdc (image=quay.io/ceph/haproxy:2.3, name=elastic_boyd)
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.303832927Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=966.845µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.306948918Z level=info msg="Executing migration" id="Update dashboard table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.30703816Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=90.622µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.31050752Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.310605302Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=98.432µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.313712823Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.320061768Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=6.338185ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.32285906Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.324523513Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.665283ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.327050308Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.328929077Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.878629ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.332856949Z level=info msg="Executing migration" id="Add column uid in dashboard"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.334397819Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.54043ms
Dec 05 09:49:53 compute-0 systemd[1]: Started libpod-conmon-b69e3595d48cccfa23f321b459cb44e2f91aa430d285b58612484d34f45d0cdc.scope.
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.339396188Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.339604334Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=205.296µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.344310745Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.344996993Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=687.208µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.350443004Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.351136543Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=694.099µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.356583954Z level=info msg="Executing migration" id="Update dashboard title length"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.356619475Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=40.031µs
Dec 05 09:49:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.364888419Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.365772032Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=886.663µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.370492234Z level=info msg="Executing migration" id="create dashboard_provisioning"
Dec 05 09:49:53 compute-0 podman[99602]: 2025-12-05 09:49:53.274314992 +0000 UTC m=+0.023786647 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.37150118Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.009446ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.376052498Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.382471734Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.414636ms
Dec 05 09:49:53 compute-0 podman[99602]: 2025-12-05 09:49:53.383636544 +0000 UTC m=+0.133108189 container init b69e3595d48cccfa23f321b459cb44e2f91aa430d285b58612484d34f45d0cdc (image=quay.io/ceph/haproxy:2.3, name=elastic_boyd)
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.385862042Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.386984351Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.123339ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.391439336Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.392174735Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=736.069µs
Dec 05 09:49:53 compute-0 podman[99602]: 2025-12-05 09:49:53.396067467 +0000 UTC m=+0.145539062 container start b69e3595d48cccfa23f321b459cb44e2f91aa430d285b58612484d34f45d0cdc (image=quay.io/ceph/haproxy:2.3, name=elastic_boyd)
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.397038952Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.397836922Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=798.471µs
Dec 05 09:49:53 compute-0 elastic_boyd[99618]: 0 0
Dec 05 09:49:53 compute-0 systemd[1]: libpod-b69e3595d48cccfa23f321b459cb44e2f91aa430d285b58612484d34f45d0cdc.scope: Deactivated successfully.
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.404208588Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Dec 05 09:49:53 compute-0 podman[99602]: 2025-12-05 09:49:53.404594438 +0000 UTC m=+0.154066043 container attach b69e3595d48cccfa23f321b459cb44e2f91aa430d285b58612484d34f45d0cdc (image=quay.io/ceph/haproxy:2.3, name=elastic_boyd)
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.404758272Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=551.174µs
Dec 05 09:49:53 compute-0 podman[99602]: 2025-12-05 09:49:53.405058199 +0000 UTC m=+0.154529834 container died b69e3595d48cccfa23f321b459cb44e2f91aa430d285b58612484d34f45d0cdc (image=quay.io/ceph/haproxy:2.3, name=elastic_boyd)
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.407555154Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.408233922Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=668.227µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.413871188Z level=info msg="Executing migration" id="Add check_sum column"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.415715346Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.845509ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.4197708Z level=info msg="Executing migration" id="Add index for dashboard_title"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.420789887Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.022297ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.424692748Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.425041737Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=355.329µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.42866054Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.429074851Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=424.201µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.432779998Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.433882576Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.107859ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.438876985Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.441080503Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.204808ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.444185273Z level=info msg="Executing migration" id="create data_source table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.445363083Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.18008ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.451877543Z level=info msg="Executing migration" id="add index data_source.account_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.452829367Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=954.755µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.456234515Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.456956303Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=726.168µs
Dec 05 09:49:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad0752d73a82a365f769702f16e46deb76a0d554c3a787385c0ace2c7777df7c-merged.mount: Deactivated successfully.
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.460966788Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.461748098Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=781.87µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.467579239Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.46836938Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=791.201µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.471519751Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.477363963Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=5.838661ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.480769381Z level=info msg="Executing migration" id="create data_source table v2"
Dec 05 09:49:53 compute-0 podman[99602]: 2025-12-05 09:49:53.48114855 +0000 UTC m=+0.230620185 container remove b69e3595d48cccfa23f321b459cb44e2f91aa430d285b58612484d34f45d0cdc (image=quay.io/ceph/haproxy:2.3, name=elastic_boyd)
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.482104245Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.335054ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.484336483Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.485154505Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=817.032µs
Dec 05 09:49:53 compute-0 systemd[1]: libpod-conmon-b69e3595d48cccfa23f321b459cb44e2f91aa430d285b58612484d34f45d0cdc.scope: Deactivated successfully.
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.49075093Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.492146625Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.408687ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.496797186Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.497658948Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=864.092µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.501879137Z level=info msg="Executing migration" id="Add column with_credentials"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.503870949Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.991662ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.506900367Z level=info msg="Executing migration" id="Add secure json data column"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.509136385Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.233888ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.512941944Z level=info msg="Executing migration" id="Update data_source table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.512975725Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=35.39µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.516299951Z level=info msg="Executing migration" id="Update initial version to 1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.516699022Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=448.432µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.519581826Z level=info msg="Executing migration" id="Add read_only data column"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.52282175Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.238494ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.52709498Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.527381359Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=289.348µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.533490787Z level=info msg="Executing migration" id="Update json_data with nulls"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.533737803Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=249.356µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.537072879Z level=info msg="Executing migration" id="Add uid column"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.539634956Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.562337ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.542555741Z level=info msg="Executing migration" id="Update uid value"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.542744736Z level=info msg="Migration successfully executed" id="Update uid value" duration=191.555µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.545805665Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.546560655Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=755.409µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.548524166Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.549193473Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=669.217µs
Dec 05 09:49:53 compute-0 systemd[1]: Reloading.
Dec 05 09:49:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.554068379Z level=info msg="Executing migration" id="create api_key table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.555051034Z level=info msg="Migration successfully executed" id="create api_key table" duration=985.355µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.560014813Z level=info msg="Executing migration" id="add index api_key.account_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.56067777Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=666.587µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.56411768Z level=info msg="Executing migration" id="add index api_key.key"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.564745856Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=628.526µs
Dec 05 09:49:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.567555679Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.568237646Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=681.887µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.570944436Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.571816569Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=871.683µs
Dec 05 09:49:53 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.575878565Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.577329232Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.451118ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.580961996Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.58187705Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=914.314µs
Dec 05 09:49:53 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 93 pg[9.10( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=2 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=93) [0]/[1] r=0 lpr=93 pi=[55,93)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:53 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 93 pg[9.10( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=2 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=93) [0]/[1] r=0 lpr=93 pi=[55,93)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.584880358Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.590855242Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=5.972315ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.592957707Z level=info msg="Executing migration" id="create api_key table v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.593616524Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=661.377µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.595441861Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.596052137Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=609.466µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.599217399Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.599886736Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=669.457µs
Dec 05 09:49:53 compute-0 ceph-mon[74418]: 9.13 scrub starts
Dec 05 09:49:53 compute-0 ceph-mon[74418]: 9.1c scrub starts
Dec 05 09:49:53 compute-0 ceph-mon[74418]: 9.13 scrub ok
Dec 05 09:49:53 compute-0 ceph-mon[74418]: 9.1c scrub ok
Dec 05 09:49:53 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 05 09:49:53 compute-0 ceph-mon[74418]: osdmap e92: 3 total, 3 up, 3 in
Dec 05 09:49:53 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:53 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:53 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:53 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:53 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:53 compute-0 ceph-mon[74418]: Deploying daemon haproxy.rgw.default.compute-0.fnpxdf on compute-0
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.603753707Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.60464981Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=897.453µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.61354692Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.614110935Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=569.335µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.616475646Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.617179025Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=706.139µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.621296041Z level=info msg="Executing migration" id="Update api_key table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.621338752Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=46.431µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.623786816Z level=info msg="Executing migration" id="Add expires to api_key table"
Dec 05 09:49:53 compute-0 systemd-rc-local-generator[99715]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:49:53 compute-0 systemd-sysv-generator[99718]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.627290416Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.502801ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.63395931Z level=info msg="Executing migration" id="Add service account foreign key"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.635890079Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.930559ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.63784165Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.638018564Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=177.054µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.640110399Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.642170782Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.057993ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.644897813Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.646725709Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.830016ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.648486566Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.649079371Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=592.696µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.65135062Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.651850382Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=499.392µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.653622719Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.654296086Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=671.817µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.656538214Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.65716811Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=629.516µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.659609974Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.660278391Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=644.376µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.664185652Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.66487119Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=685.278µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.667285793Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.667340294Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=54.831µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.670150687Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.670169417Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=19.16µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.671960763Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.673960675Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=1.999572ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.675933727Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.677908697Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.97487ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.680674379Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.68071921Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=45.201µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.683381849Z level=info msg="Executing migration" id="create quota table v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.683915683Z level=info msg="Migration successfully executed" id="create quota table v1" duration=533.504µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.686041688Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.686661254Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=619.136µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.689132578Z level=info msg="Executing migration" id="Update quota table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.689156719Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=24.591µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.691108219Z level=info msg="Executing migration" id="create plugin_setting table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.691774406Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=665.237µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.694320552Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.694946548Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=626.056µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.697498755Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.699574709Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.076364ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.701667083Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.701691524Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=25.001µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.705086602Z level=info msg="Executing migration" id="create session table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.705840521Z level=info msg="Migration successfully executed" id="create session table" duration=753.63µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.709542467Z level=info msg="Executing migration" id="Drop old table playlist table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.70962794Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=86.243µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.711921188Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.71199524Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=74.842µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.714361561Z level=info msg="Executing migration" id="create playlist table v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.714968038Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=602.577µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.719010492Z level=info msg="Executing migration" id="create playlist item table v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.719945096Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=933.654µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.722438851Z level=info msg="Executing migration" id="Update playlist table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.722461181Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=23.51µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.724606127Z level=info msg="Executing migration" id="Update playlist_item table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.724647258Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=44.321µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.726764323Z level=info msg="Executing migration" id="Add playlist column created_at"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.729306789Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.541936ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.732925173Z level=info msg="Executing migration" id="Add playlist column updated_at"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.735406087Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.481134ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.738547358Z level=info msg="Executing migration" id="drop preferences table v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.73862806Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=81.472µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.740435748Z level=info msg="Executing migration" id="drop preferences table v3"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.740507539Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=72.501µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.74399929Z level=info msg="Executing migration" id="create preferences table v3"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.744688597Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=689.047µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.748580529Z level=info msg="Executing migration" id="Update preferences table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.748602949Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=26.87µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.750882888Z level=info msg="Executing migration" id="Add column team_id in preferences"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.753360452Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.476545ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.755151629Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.755293682Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=142.803µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.757338446Z level=info msg="Executing migration" id="Add column week_start in preferences"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.759787129Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.447744ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.761599365Z level=info msg="Executing migration" id="Add column preferences.json_data"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.764145061Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.547326ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.766522903Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.766572404Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=50.411µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.769064469Z level=info msg="Executing migration" id="Add preferences index org_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.769908331Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=838.761µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.77259282Z level=info msg="Executing migration" id="Add preferences index user_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.773312579Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=719.079µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.778231766Z level=info msg="Executing migration" id="create alert table v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.779121499Z level=info msg="Migration successfully executed" id="create alert table v1" duration=890.343µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.781186983Z level=info msg="Executing migration" id="add index alert org_id & id "
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.781975153Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=787.73µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.784541539Z level=info msg="Executing migration" id="add index alert state"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.785238538Z level=info msg="Migration successfully executed" id="add index alert state" duration=696.349µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.787605259Z level=info msg="Executing migration" id="add index alert dashboard_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.788481912Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=876.413µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.790959806Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.791542221Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=582.715µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.794033676Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.794875037Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=841.751µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.798576683Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.799768284Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.188201ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.802539536Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.813424418Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.880942ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.815680296Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.81735325Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.673354ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.819884465Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.821031735Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.14687ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.823603221Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.824015532Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=412.861µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.826313652Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.827138034Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=824.292µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.830149972Z level=info msg="Executing migration" id="create alert_notification table v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.831143597Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=993.635µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.833984861Z level=info msg="Executing migration" id="Add column is_default"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.836625089Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.640448ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.839539475Z level=info msg="Executing migration" id="Add column frequency"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.843511327Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.962513ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.846156816Z level=info msg="Executing migration" id="Add column send_reminder"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.849327939Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.171093ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.852720726Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.856176116Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.451259ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.858522127Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.85943653Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=914.494µs
Dec 05 09:49:53 compute-0 systemd[1]: Reloading.
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.862568001Z level=info msg="Executing migration" id="Update alert table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.862744055Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=177.574µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.865033665Z level=info msg="Executing migration" id="Update alert_notification table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.865118437Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=85.972µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.868564116Z level=info msg="Executing migration" id="create notification_journal table v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.869605914Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.040778ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.872977461Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.873776991Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=799.39µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.877241601Z level=info msg="Executing migration" id="drop alert_notification_journal"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.878166105Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=912.644µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.881284686Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.882105977Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=821.061µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.886385728Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.887397334Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=945.424µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.894571861Z level=info msg="Executing migration" id="Add for to alert table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.900403232Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=5.833031ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.902744852Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.908666006Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=5.914853ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.911774056Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.912098664Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=325.258µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.916225511Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.917821773Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.597411ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.921376204Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.923019267Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.644873ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.925581634Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Dec 05 09:49:53 compute-0 systemd-sysv-generator[99786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.935346126Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=9.758092ms
Dec 05 09:49:53 compute-0 systemd-rc-local-generator[99780]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.938353574Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.938441126Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=89.932µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.940736226Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.942100932Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.364106ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.94471178Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.945975792Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.260563ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.950039357Z level=info msg="Executing migration" id="Drop old annotation table v4"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.950186321Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=147.044µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.953638661Z level=info msg="Executing migration" id="create annotation table v5"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.95478654Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.147069ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.958076105Z level=info msg="Executing migration" id="add index annotation 0 v3"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.959316648Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.243423ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.964186174Z level=info msg="Executing migration" id="add index annotation 1 v3"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.965341614Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.15636ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.968074284Z level=info msg="Executing migration" id="add index annotation 2 v3"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.969053789Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=980.895µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.974946562Z level=info msg="Executing migration" id="add index annotation 3 v3"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.975936868Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=991.206µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.980299311Z level=info msg="Executing migration" id="add index annotation 4 v3"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.981202585Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=903.234µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.984411338Z level=info msg="Executing migration" id="Update annotation table charset"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.984438868Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=28.7µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.988436702Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.99187245Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.435428ms
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.994638502Z level=info msg="Executing migration" id="Drop category_id index"
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.995555117Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=917.224µs
Dec 05 09:49:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:53.997310951Z level=info msg="Executing migration" id="Add column tags to annotation table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.000100344Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.789283ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.002516516Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.003077801Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=576.265µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.005683378Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.006453499Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=769.681µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.008788029Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.00959091Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=802.101µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.011700294Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.019616089Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=7.915325ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.021631742Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.02235814Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=726.508µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.025839501Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.026712203Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=872.362µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.030862351Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.03122706Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=368.42µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.033309714Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.033972521Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=662.837µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.036295291Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.036442705Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=194.685µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.040027519Z level=info msg="Executing migration" id="Add created time to annotation table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.043498878Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.469799ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.045540982Z level=info msg="Executing migration" id="Add updated time to annotation table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.048795096Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.252984ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.051977268Z level=info msg="Executing migration" id="Add index for created in annotation table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.053043625Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.067647ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.055706514Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.056557956Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=852.142µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.059208565Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.059489203Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=285.888µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.06171929Z level=info msg="Executing migration" id="Add epoch_end column"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.067610162Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=5.887372ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:54 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.070670243Z level=info msg="Executing migration" id="Add index for epoch_end"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.072191521Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.517999ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.075934138Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.076266757Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=338.009µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.078392202Z level=info msg="Executing migration" id="Move region to single row"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.078933986Z level=info msg="Migration successfully executed" id="Move region to single row" duration=542.714µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.081052551Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.081979085Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=926.814µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.084612833Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.085693421Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.080258ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.089443438Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.09063424Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.191352ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.095673289Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.096883121Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.209892ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.101698826Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.102800474Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.102208ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.105135145Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.105964626Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=830.161µs
Dec 05 09:49:54 compute-0 sudo[99757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfznrrfmwxojrbccgcijrtpsdmxmlmpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928193.1720297-56-6607432794572/AnsiballZ_command.py'
Dec 05 09:49:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:49:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.110935315Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.11110824Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=183.055µs
Dec 05 09:49:54 compute-0 sudo[99757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.115707129Z level=info msg="Executing migration" id="create test_data table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.116574001Z level=info msg="Migration successfully executed" id="create test_data table" duration=864.782µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.12038662Z level=info msg="Executing migration" id="create dashboard_version table v1"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.121197221Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=811.881µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.123541941Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.124896147Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.354176ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.127237738Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.128069679Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=829.261µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.131050296Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.131303283Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=252.147µs
Dec 05 09:49:54 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.fnpxdf for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.133818678Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.134440684Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=620.576µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.137191055Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.13737421Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=183.725µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.139650959Z level=info msg="Executing migration" id="create team table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.140406748Z level=info msg="Migration successfully executed" id="create team table" duration=756.589µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.144687659Z level=info msg="Executing migration" id="add index team.org_id"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.145666015Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=983.775µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.148394705Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.149199017Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=803.812µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.151807353Z level=info msg="Executing migration" id="Add column uid in team"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.155243703Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.43588ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.158644301Z level=info msg="Executing migration" id="Update uid column values in team"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.158890067Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=244.016µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.161642099Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.162713376Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.071317ms
Dec 05 09:49:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 05 09:49:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.170564Z level=info msg="Executing migration" id="create team member table"
Dec 05 09:49:54 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 05 09:49:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec 05 09:49:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.175029076Z level=info msg="Migration successfully executed" id="create team member table" duration=4.461116ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.178484715Z level=info msg="Executing migration" id="add index team_member.org_id"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.17946916Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=985.375µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.182322404Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.183144546Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=821.812µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.186437361Z level=info msg="Executing migration" id="add index team_member.team_id"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.187669453Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.236242ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.19105578Z level=info msg="Executing migration" id="Add column email to team table"
Dec 05 09:49:54 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 94 pg[9.10( v 52'1029 (0'0,52'1029] local-lis/les=93/94 n=2 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=93) [0]/[1] async=[0] r=0 lpr=93 pi=[55,93)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.195800813Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.748933ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.197820976Z level=info msg="Executing migration" id="Add column external to team_member table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.202332763Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.515557ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.204489818Z level=info msg="Executing migration" id="Add column permission to team_member table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.20800284Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.512692ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.210040392Z level=info msg="Executing migration" id="create dashboard acl table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.211208383Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.171561ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.214747894Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.215624597Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=877.863µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.217868275Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.218786179Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=918.314µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.221393976Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.222132786Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=739.35µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.22422363Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.224940259Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=716.699µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.227244878Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.227986998Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=740.22µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.230227505Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.231021286Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=797.951µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.233058108Z level=info msg="Executing migration" id="add index dashboard_permission"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.233813919Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=755.891µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.236014606Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.23656368Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=549.014µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.238409187Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.238627472Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=218.725µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.240360888Z level=info msg="Executing migration" id="create tag table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.241001534Z level=info msg="Migration successfully executed" id="create tag table" duration=640.536µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.24391532Z level=info msg="Executing migration" id="add index tag.key_value"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.244653229Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=738.639µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.24700668Z level=info msg="Executing migration" id="create login attempt table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.247962645Z level=info msg="Migration successfully executed" id="create login attempt table" duration=959.686µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.250442369Z level=info msg="Executing migration" id="add index login_attempt.username"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.251419004Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=976.325µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.254825272Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.255746687Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=921.825µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.258890428Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.270036307Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=11.145689ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.271908225Z level=info msg="Executing migration" id="create login_attempt v2"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.272657185Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=748.33µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.274482802Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.275194151Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=711.559µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.279333897Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.279633615Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=299.908µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.281716299Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.282298354Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=581.865µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.285094397Z level=info msg="Executing migration" id="create user auth table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.285708762Z level=info msg="Migration successfully executed" id="create user auth table" duration=614.045µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.288566266Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.289323137Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=756.761µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.294568372Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.294660054Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=92.072µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.297808296Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.30141811Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.609604ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.303068192Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Dec 05 09:49:54 compute-0 python3.9[99795]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.306658456Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.590404ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.308790701Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.312958409Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.168038ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.315855634Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.320184226Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.327942ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.322350442Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.323355588Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.005577ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.326397647Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.33190745Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.510013ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.33772687Z level=info msg="Executing migration" id="create server_lock table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.338866999Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.140509ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.342152384Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.343616493Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.464319ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.347090483Z level=info msg="Executing migration" id="create user auth token table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.348422888Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.333865ms
Dec 05 09:49:54 compute-0 podman[99843]: 2025-12-05 09:49:54.349340741 +0000 UTC m=+0.052905552 container create e8d5804569e0a94a48882cc6e4f68778b112517abc1d9d50b6523f24504cd36b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-rgw-default-compute-0-fnpxdf)
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.351368794Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.35279855Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.429156ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.356504677Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.357934594Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.429536ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.361977759Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.363366614Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.387705ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.366342901Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.372165592Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.822841ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.375832297Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.376648949Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=817.222µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.379508523Z level=info msg="Executing migration" id="create cache_data table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.380501378Z level=info msg="Migration successfully executed" id="create cache_data table" duration=992.235µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.383900656Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Dec 05 09:49:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd901554f473f59d2d866af5a13fffe1d8529538ffa81f60acc2cc7f3cedfee5/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.39058833Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=6.687174ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.393745492Z level=info msg="Executing migration" id="create short_url table v1"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.394613304Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=868.933µs
Dec 05 09:49:54 compute-0 podman[99843]: 2025-12-05 09:49:54.39716717 +0000 UTC m=+0.100731991 container init e8d5804569e0a94a48882cc6e4f68778b112517abc1d9d50b6523f24504cd36b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-rgw-default-compute-0-fnpxdf)
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.397581371Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.398371721Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=790.171µs
Dec 05 09:49:54 compute-0 podman[99843]: 2025-12-05 09:49:54.401273447 +0000 UTC m=+0.104838258 container start e8d5804569e0a94a48882cc6e4f68778b112517abc1d9d50b6523f24504cd36b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-rgw-default-compute-0-fnpxdf)
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.401999375Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.402059787Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=62.142µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.404919541Z level=info msg="Executing migration" id="delete alert_definition table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.405012243Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=94.102µs
Dec 05 09:49:54 compute-0 bash[99843]: e8d5804569e0a94a48882cc6e4f68778b112517abc1d9d50b6523f24504cd36b
Dec 05 09:49:54 compute-0 podman[99843]: 2025-12-05 09:49:54.324567049 +0000 UTC m=+0.028131950 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.408944765Z level=info msg="Executing migration" id="recreate alert_definition table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.409864789Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=920.264µs
Dec 05 09:49:54 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.fnpxdf for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.413238047Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-rgw-default-compute-0-fnpxdf[99864]: [NOTICE] 338/094954 (2) : New worker #1 (4) forked
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.414425557Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.186741ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.417510277Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.418656927Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.14716ms
Dec 05 09:49:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:49:54 compute-0 sudo[99439]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:54 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.891380999Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.891527324Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=151.855µs
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.921146818Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.923406858Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=2.26548ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.926326934Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.927638468Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.313024ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.932549807Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.93378635Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.240583ms
Dec 05 09:49:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.96742563Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.968665962Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.243552ms
Dec 05 09:49:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.971605809Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.975866781Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.259902ms
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.988784688Z level=info msg="Executing migration" id="drop alert_definition table"
Dec 05 09:49:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:54.990182765Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.406867ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.006705718Z level=info msg="Executing migration" id="delete alert_definition_version table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.006865602Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=167.135µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.017629854Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.018907227Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.282344ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.021990557Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.02321911Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.231643ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.027618734Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.028558549Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=939.645µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.030958582Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.031010143Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=52.391µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.034348051Z level=info msg="Executing migration" id="drop alert_definition_version table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.036109897Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.761126ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.038677564Z level=info msg="Executing migration" id="create alert_instance table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.039878715Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.201181ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.042043662Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.043310345Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.265393ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.04579162Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.047120925Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.334605ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.050693548Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.057531667Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.835929ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.059623252Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.060595368Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=972.215µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.062395505Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.063206986Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=812.071µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.065044813Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Dec 05 09:49:55 compute-0 ceph-mon[74418]: 9.12 deep-scrub starts
Dec 05 09:49:55 compute-0 ceph-mon[74418]: 9.12 deep-scrub ok
Dec 05 09:49:55 compute-0 ceph-mon[74418]: 9.b scrub starts
Dec 05 09:49:55 compute-0 ceph-mon[74418]: 9.b scrub ok
Dec 05 09:49:55 compute-0 ceph-mon[74418]: osdmap e93: 3 total, 3 up, 3 in
Dec 05 09:49:55 compute-0 ceph-mon[74418]: pgmap v110: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:55 compute-0 ceph-mon[74418]: osdmap e94: 3 total, 3 up, 3 in
Dec 05 09:49:55 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.092570564Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.51673ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.095955823Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Dec 05 09:49:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:55 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:55 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.nrbvmi on compute-2
Dec 05 09:49:55 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.nrbvmi on compute-2
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.123186205Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=27.221582ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.125365952Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.126306067Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=940.994µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.128465153Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.129270654Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=802.481µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.132072287Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.13675686Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.681663ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.145637812Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.15090673Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.267228ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.155404739Z level=info msg="Executing migration" id="create alert_rule table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.156456166Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.052368ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.159225458Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.160174333Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=948.855µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.162166065Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.162980306Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=817.421µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.16579461Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Dec 05 09:49:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.166829797Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.035037ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.169504908Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.169571579Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=68.172µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.172178228Z level=info msg="Executing migration" id="add column for to alert_rule"
Dec 05 09:49:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 05 09:49:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.17724623Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=5.066753ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.180119335Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Dec 05 09:49:55 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.18451028Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.386175ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.196712659Z level=info msg="Executing migration" id="add column labels to alert_rule"
Dec 05 09:49:55 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 95 pg[9.11( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=95 pruub=8.344175339s) [0] r=-1 lpr=95 pi=[55,95)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 266.108825684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:55 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 95 pg[9.11( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=95 pruub=8.344134331s) [0] r=-1 lpr=95 pi=[55,95)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 266.108825684s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:55 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 95 pg[9.10( v 52'1029 (0'0,52'1029] local-lis/les=93/94 n=2 ec=55/40 lis/c=93/55 les/c/f=94/57/0 sis=95 pruub=14.995099068s) [0] async=[0] r=-1 lpr=95 pi=[55,95)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 272.760467529s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:55 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 95 pg[9.10( v 52'1029 (0'0,52'1029] local-lis/les=93/94 n=2 ec=55/40 lis/c=93/55 les/c/f=94/57/0 sis=95 pruub=14.994650841s) [0] r=-1 lpr=95 pi=[55,95)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 272.760467529s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.203498617Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.868891ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.207172703Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.208086507Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=914.424µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.210382907Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.211304021Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=919.584µs
Dec 05 09:49:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.784020185s ======
Dec 05 09:49:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:49:54.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.784020185s
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.213227861Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.21737056Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.142989ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.220650486Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.225348699Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.680353ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.228155522Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.229063686Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=909.314µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.232146226Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.236915931Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.766275ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.238911524Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.244220352Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.303168ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.247234402Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.247300593Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=66.961µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.250778734Z level=info msg="Executing migration" id="create alert_rule_version table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.25176808Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=988.886µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.255506237Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.256475533Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=970.166µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.259523853Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.260722504Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.197801ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.264511994Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.264594996Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=81.152µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.267125642Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.272149193Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.021111ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.27431449Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.279717031Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.404192ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.282119334Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.286763456Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.641582ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.289147227Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.293502102Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.354445ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.295155906Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.299485008Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.327883ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.302111477Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.30224461Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=135.183µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.304344716Z level=info msg="Executing migration" id=create_alert_configuration_table
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.305101845Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=757.389µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.307805916Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.31407443Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.259984ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.316074052Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.316141094Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=68.212µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.317834879Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.322763107Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.924988ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.325497429Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.326586638Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.090329ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.329124014Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.334066874Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.94587ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.335686936Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.336357083Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=669.637µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.338731455Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.339684071Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=952.866µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.344045204Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.349268741Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=5.203807ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.351265644Z level=info msg="Executing migration" id="create provenance_type table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.352104735Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=856.812µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.354930849Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.356031548Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.100199ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.358374979Z level=info msg="Executing migration" id="create alert_image table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.359155299Z level=info msg="Migration successfully executed" id="create alert_image table" duration=780.45µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.363538174Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.364863549Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.329325ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.36757439Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.367685083Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=113.863µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.371483513Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.37255653Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.076348ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.381489264Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.38247959Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=992.576µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.396428065Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.397020091Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.405470492Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.406288353Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=822.501µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.408183283Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.409426085Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.244192ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.411750246Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.418081241Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.302484ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.420040543Z level=info msg="Executing migration" id="create library_element table v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.421023559Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=983.606µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.428160815Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.429811509Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.649794ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.432769036Z level=info msg="Executing migration" id="create library_element_connection table v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.433465234Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=696.278µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.438888436Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.4397929Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=907.894µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.442060029Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.443020514Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=959.865µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.44515162Z level=info msg="Executing migration" id="increase max description length to 2048"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.445175711Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=24.85µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.447589784Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.447640155Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=50.801µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.450285454Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.450585412Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=300.428µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.454487904Z level=info msg="Executing migration" id="create data_keys table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.455508771Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.019927ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.458645753Z level=info msg="Executing migration" id="create secrets table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.459431963Z level=info msg="Migration successfully executed" id="create secrets table" duration=789.95µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.461740364Z level=info msg="Executing migration" id="rename data_keys name column to id"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.491801311Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=30.054107ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.494575844Z level=info msg="Executing migration" id="add name column into data_keys"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.500466378Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.885964ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.503185689Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.503397274Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=212.125µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.505273064Z level=info msg="Executing migration" id="rename data_keys name column to label"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.534681483Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=29.400598ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.537543157Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.569743681Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=32.197233ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.57202806Z level=info msg="Executing migration" id="create kv_store table v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.572963255Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=935.496µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.575769078Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.576960309Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.191581ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.579038123Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.57929102Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=252.897µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.580970344Z level=info msg="Executing migration" id="create permission table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.581875648Z level=info msg="Migration successfully executed" id="create permission table" duration=904.864µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.590540894Z level=info msg="Executing migration" id="add unique index permission.role_id"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.591481229Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=942.855µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.594229861Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.595084974Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=858.173µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.598896873Z level=info msg="Executing migration" id="create role table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.599772486Z level=info msg="Migration successfully executed" id="create role table" duration=875.403µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.602376794Z level=info msg="Executing migration" id="add column display_name"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.608849694Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.47228ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.610482846Z level=info msg="Executing migration" id="add column group_name"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.616543014Z level=info msg="Migration successfully executed" id="add column group_name" duration=6.059888ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.618929117Z level=info msg="Executing migration" id="add index role.org_id"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.619990595Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.058368ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.623072626Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.624190935Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.119209ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.637023101Z level=info msg="Executing migration" id="add index role_org_id_uid"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.638165221Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.145409ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.640666536Z level=info msg="Executing migration" id="create team role table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.641657642Z level=info msg="Migration successfully executed" id="create team role table" duration=991.966µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.644583768Z level=info msg="Executing migration" id="add index team_role.org_id"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.645610836Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.028108ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.649203679Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.650601246Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.398346ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.653931774Z level=info msg="Executing migration" id="add index team_role.team_id"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.654820676Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=890.033µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.658466952Z level=info msg="Executing migration" id="create user role table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.659201502Z level=info msg="Migration successfully executed" id="create user role table" duration=735.409µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.662770145Z level=info msg="Executing migration" id="add index user_role.org_id"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.663791381Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.022606ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.6679358Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.669145801Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.211561ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.671728049Z level=info msg="Executing migration" id="add index user_role.user_id"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.672809557Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.077908ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.678312741Z level=info msg="Executing migration" id="create builtin role table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.679343078Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.059688ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.683808255Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.685071598Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.266004ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.688669862Z level=info msg="Executing migration" id="add index builtin_role.name"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.689635577Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=968.095µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.693081948Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.699011403Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.928065ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.703414828Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.704496387Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.083398ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.709472207Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.710534274Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.063507ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.715042943Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.716190962Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.14938ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.718591795Z level=info msg="Executing migration" id="add unique index role.uid"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.719439377Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=848.702µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.722951219Z level=info msg="Executing migration" id="create seed assignment table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.723669808Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=716.029µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.726532493Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.727509139Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=978.205µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.730152368Z level=info msg="Executing migration" id="add column hidden to role table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.736160155Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=6.002087ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.739878562Z level=info msg="Executing migration" id="permission kind migration"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.748048206Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.167043ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.750627963Z level=info msg="Executing migration" id="permission attribute migration"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.756989709Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.361746ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.759319741Z level=info msg="Executing migration" id="permission identifier migration"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.765150023Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.828892ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.768747848Z level=info msg="Executing migration" id="add permission identifier index"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.769697542Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=951.164µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.773929564Z level=info msg="Executing migration" id="add permission action scope role_id index"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.774911199Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=981.545µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.777774414Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.778900463Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.129999ms
Dec 05 09:49:55 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 26 completed events
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.781188603Z level=info msg="Executing migration" id="create query_history table v1"
Dec 05 09:49:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.782075416Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=886.703µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.78565739Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.78679443Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.13727ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.790707763Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.790772514Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=65.801µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.794127552Z level=info msg="Executing migration" id="rbac disabled migrator"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.794176433Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=49.941µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.798582588Z level=info msg="Executing migration" id="teams permissions migration"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.799075771Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=493.053µs
Dec 05 09:49:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.806265379Z level=info msg="Executing migration" id="dashboard permissions"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.806924366Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=681.577µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.80936048Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.809917404Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=556.724µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.81470307Z level=info msg="Executing migration" id="drop managed folder create actions"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.815031048Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=332.498µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.818040387Z level=info msg="Executing migration" id="alerting notification permissions"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.818566061Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=526.074µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.820618055Z level=info msg="Executing migration" id="create query_history_star table v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.821367964Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=749.589µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.824361853Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.825298297Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=936.534µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.828864941Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.834835757Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.969647ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.837359143Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.837413074Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=54.611µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.839904549Z level=info msg="Executing migration" id="create correlation table v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.840869564Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=965.125µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.843672228Z level=info msg="Executing migration" id="add index correlations.uid"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.844705445Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.033947ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.847994831Z level=info msg="Executing migration" id="add index correlations.source_uid"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.848804652Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=809.721µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.852279573Z level=info msg="Executing migration" id="add correlation config column"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.858317422Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.035048ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.86057638Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.861454304Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=877.844µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.86361594Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.864473283Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=854.472µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.867564733Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.884657981Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=17.091457ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.886345725Z level=info msg="Executing migration" id="create correlation v2"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.887582577Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.236722ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.889942149Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.890782921Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=840.242µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.892976048Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.893868701Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=892.233µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.895953176Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.896792008Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=839.282µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.900568807Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.900802853Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=234.106µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.903052682Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.903880274Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=826.532µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.906494992Z level=info msg="Executing migration" id="add provisioning column"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.914752798Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.250606ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.917911011Z level=info msg="Executing migration" id="create entity_events table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.919330717Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.421286ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.922014548Z level=info msg="Executing migration" id="create dashboard public config v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.923053666Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.039737ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.925385827Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.925800047Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.927509491Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.92782845Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.929469223Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.930195652Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=728.979µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.934224408Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.935023538Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=801.82µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.940478321Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.941538639Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.060938ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.944398233Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.945302748Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=904.815µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.9496028Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.950494114Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=890.664µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.957196889Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.958132933Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=937.654µs
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.963696778Z level=info msg="Executing migration" id="Drop public config table"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.964748377Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.054099ms
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.96909771Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Dec 05 09:49:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:55.97020927Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.11491ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.020892276Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.022570549Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.684724ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:56 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c0022e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.081978083Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.083213896Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.243433ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.089598713Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.091210416Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.557551ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.095405885Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.117951225Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=22.53973ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.120782449Z level=info msg="Executing migration" id="add annotations_enabled column"
Dec 05 09:49:56 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:56 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:56 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:56 compute-0 ceph-mon[74418]: Deploying daemon haproxy.rgw.default.compute-2.nrbvmi on compute-2
Dec 05 09:49:56 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 05 09:49:56 compute-0 ceph-mon[74418]: osdmap e95: 3 total, 3 up, 3 in
Dec 05 09:49:56 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.129761525Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.969605ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.136955682Z level=info msg="Executing migration" id="add time_selection_enabled column"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.143656918Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.702516ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.146478212Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.146704888Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=228.176µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.148686509Z level=info msg="Executing migration" id="add share column"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.154590793Z level=info msg="Migration successfully executed" id="add share column" duration=5.903934ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.157968583Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.158182638Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=214.965µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.160734975Z level=info msg="Executing migration" id="create file table"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.162050829Z level=info msg="Migration successfully executed" id="create file table" duration=1.318324ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.166968248Z level=info msg="Executing migration" id="file table idx: path natural pk"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.16821833Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.249252ms
Dec 05 09:49:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.171543017Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.172831441Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.289564ms
Dec 05 09:49:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.17772421Z level=info msg="Executing migration" id="create file_meta table"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.17889827Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.176331ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.182449132Z level=info msg="Executing migration" id="file table idx: path key"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.183764987Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.315825ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.187149486Z level=info msg="Executing migration" id="set path collation in file table"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.187276569Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=133.363µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.189912358Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.18998432Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=72.822µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.192305511Z level=info msg="Executing migration" id="managed permissions migration"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.193006779Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=700.228µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.196065989Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.196392077Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=328.378µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.198909133Z level=info msg="Executing migration" id="RBAC action name migrator"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.200735371Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.824058ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.203525384Z level=info msg="Executing migration" id="Add UID column to playlist"
Dec 05 09:49:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 05 09:49:56 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.219567814Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=16.01776ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.222762398Z level=info msg="Executing migration" id="Update uid column values in playlist"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.223020175Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=259.417µs
Dec 05 09:49:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 96 pg[9.11( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=96) [0]/[1] r=0 lpr=96 pi=[55,96)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 96 pg[9.11( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=96) [0]/[1] r=0 lpr=96 pi=[55,96)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.226024383Z level=info msg="Executing migration" id="Add index for uid in playlist"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.227881692Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.861189ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.232354939Z level=info msg="Executing migration" id="update group index for alert rules"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.232706568Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=352.57µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.235211493Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.235443629Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=231.766µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.237956945Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.238432747Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=475.562µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.241159219Z level=info msg="Executing migration" id="add action column to seed_assignment"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.248008979Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.848349ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.250773721Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.257805704Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.027053ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.261215193Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.262711023Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.49681ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.267374375Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.345709625Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=78.33338ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.350969383Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.352143783Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.17705ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.35429066Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.355154622Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=861.242µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.358025667Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.380211148Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=22.18169ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.38488576Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.391693508Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.806798ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.394243615Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.394610674Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=373.539µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.396937805Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.397162411Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=224.226µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.39941329Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.399611365Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=197.955µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.401624468Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.401829714Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=204.976µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.404860903Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.4051398Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=280.397µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.408438796Z level=info msg="Executing migration" id="create folder table"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.409578516Z level=info msg="Migration successfully executed" id="create folder table" duration=1.13968ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.412687638Z level=info msg="Executing migration" id="Add index for parent_uid"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.414463684Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.775325ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.417809711Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.418815368Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.005567ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.426072887Z level=info msg="Executing migration" id="Update folder title length"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.426100208Z level=info msg="Migration successfully executed" id="Update folder title length" duration=28.761µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.428981873Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.42996552Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=983.707µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.434030906Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.434942189Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=911.393µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.43797507Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.438904124Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=928.593µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.443342729Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.44371994Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=379.661µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.446812541Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.447030096Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=217.805µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.450327532Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.45139737Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.070208ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.453468154Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.454581523Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.112689ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.457628384Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.458559068Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=930.184µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.462799038Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.463946539Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.147711ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.467639485Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.468523278Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=883.163µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.473041947Z level=info msg="Executing migration" id="create anon_device table"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.473828987Z level=info msg="Migration successfully executed" id="create anon_device table" duration=787.03µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.478549521Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.479602218Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.053377ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.483358987Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.484394344Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.031987ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.487405322Z level=info msg="Executing migration" id="create signing_key table"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.488235455Z level=info msg="Migration successfully executed" id="create signing_key table" duration=829.673µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.492158397Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.493225225Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.067007ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.496063269Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.497184119Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.12061ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.499786587Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.500081635Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=295.517µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.502024365Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.509654905Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.627789ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.512497789Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.513103935Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=607.006µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.516590767Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.51748204Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=890.953µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.520742925Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.521740921Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.000996ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.524529614Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.525550961Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.021298ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.532941724Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.534137075Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.197951ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.537053372Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.537981416Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=924.384µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.542223317Z level=info msg="Executing migration" id="create sso_setting table"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.54310395Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=880.243µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:56 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.772835551Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.77396037Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.130799ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.783990853Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.784407884Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=420.991µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.788092061Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.788151133Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=60.632µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.791458709Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.798314298Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.850629ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.800903686Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.808731271Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.820244ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.811984266Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.812452208Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=470.202µs
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=migrator t=2025-12-05T09:49:56.814591964Z level=info msg="migrations completed" performed=547 skipped=0 duration=3.999714674s
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=sqlstore t=2025-12-05T09:49:56.815893028Z level=info msg="Created default organization"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=secrets t=2025-12-05T09:49:56.818592238Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=plugin.store t=2025-12-05T09:49:56.847056354Z level=info msg="Loading plugins..."
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=local.finder t=2025-12-05T09:49:56.929958613Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=plugin.store t=2025-12-05T09:49:56.929999834Z level=info msg="Plugins loaded" count=55 duration=82.944ms
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=query_data t=2025-12-05T09:49:56.933190297Z level=info msg="Query Service initialization"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=live.push_http t=2025-12-05T09:49:56.936878234Z level=info msg="Live Push Gateway initialization"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=ngalert.migration t=2025-12-05T09:49:56.943761434Z level=info msg=Starting
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=ngalert.migration t=2025-12-05T09:49:56.944161644Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=ngalert.migration orgID=1 t=2025-12-05T09:49:56.944545964Z level=info msg="Migrating alerts for organisation"
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=ngalert.migration orgID=1 t=2025-12-05T09:49:56.945093979Z level=info msg="Alerts found to migrate" alerts=0
Dec 05 09:49:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=ngalert.migration t=2025-12-05T09:49:56.946707151Z level=info msg="Completed alerting migration"
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=ngalert.state.manager t=2025-12-05T09:49:57.037966219Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=infra.usagestats.collector t=2025-12-05T09:49:57.0410632Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=provisioning.datasources t=2025-12-05T09:49:57.042559619Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:57 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=provisioning.alerting t=2025-12-05T09:49:57.119127113Z level=info msg="starting to provision alerting"
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=provisioning.alerting t=2025-12-05T09:49:57.119186884Z level=info msg="finished to provision alerting"
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=ngalert.state.manager t=2025-12-05T09:49:57.119541153Z level=info msg="Warming state cache for startup"
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=ngalert.multiorg.alertmanager t=2025-12-05T09:49:57.11979763Z level=info msg="Starting MultiOrg Alertmanager"
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=grafanaStorageLogger t=2025-12-05T09:49:57.120032286Z level=info msg="Storage starting"
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=ngalert.state.manager t=2025-12-05T09:49:57.12016812Z level=info msg="State cache has been initialized" states=0 duration=626.877µs
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=ngalert.scheduler t=2025-12-05T09:49:57.120217461Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=ticker t=2025-12-05T09:49:57.120319144Z level=info msg=starting first_tick=2025-12-05T09:50:00Z
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=http.server t=2025-12-05T09:49:57.124072281Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=http.server t=2025-12-05T09:49:57.124439402Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=provisioning.dashboard t=2025-12-05T09:49:57.132477811Z level=info msg="starting to provision dashboards"
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=grafana.update.checker t=2025-12-05T09:49:57.207415412Z level=info msg="Update check succeeded" duration=86.587175ms
Dec 05 09:49:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=plugins.update.checker t=2025-12-05T09:49:57.213602004Z level=info msg="Update check succeeded" duration=93.752083ms
Dec 05 09:49:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:49:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:49:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:49:57.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=sqlstore.transactions t=2025-12-05T09:49:57.279263023Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=sqlstore.transactions t=2025-12-05T09:49:57.293517926Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=sqlstore.transactions t=2025-12-05T09:49:57.307072931Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=sqlstore.transactions t=2025-12-05T09:49:57.322861563Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec 05 09:49:57 compute-0 ceph-mon[74418]: pgmap v113: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:57 compute-0 ceph-mon[74418]: osdmap e96: 3 total, 3 up, 3 in
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=provisioning.dashboard t=2025-12-05T09:49:57.490979372Z level=info msg="finished to provision dashboards"
Dec 05 09:49:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 05 09:49:57 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=grafana-apiserver t=2025-12-05T09:49:57.688492961Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec 05 09:49:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=grafana-apiserver t=2025-12-05T09:49:57.688946873Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec 05 09:49:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:49:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:49:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:49:57.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:49:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:49:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:57 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 97 pg[9.11( v 52'1029 (0'0,52'1029] local-lis/les=96/97 n=5 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=96) [0]/[1] async=[0] r=0 lpr=96 pi=[55,96)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:49:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:49:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 05 09:49:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:58 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Dec 05 09:49:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:58 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:58 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:58 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.unmbjp on compute-2
Dec 05 09:49:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.unmbjp on compute-2
Dec 05 09:49:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:58 compute-0 ceph-mon[74418]: osdmap e97: 3 total, 3 up, 3 in
Dec 05 09:49:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:58 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:58 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:58 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:58 compute-0 ceph-mon[74418]: Deploying daemon keepalived.rgw.default.compute-2.unmbjp on compute-2
Dec 05 09:49:58 compute-0 ceph-mon[74418]: pgmap v116: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:49:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:58 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c0022e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:49:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 05 09:49:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:49:59 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:49:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 05 09:49:59 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 05 09:49:59 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 98 pg[9.11( v 52'1029 (0'0,52'1029] local-lis/les=96/97 n=5 ec=55/40 lis/c=96/55 les/c/f=97/57/0 sis=98 pruub=14.834662437s) [0] async=[0] r=-1 lpr=98 pi=[55,98)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 276.547363281s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:49:59 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 98 pg[9.11( v 52'1029 (0'0,52'1029] local-lis/les=96/97 n=5 ec=55/40 lis/c=96/55 les/c/f=97/57/0 sis=98 pruub=14.834513664s) [0] r=-1 lpr=98 pi=[55,98)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 276.547363281s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.152679) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928199152767, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 634, "num_deletes": 257, "total_data_size": 612591, "memory_usage": 625800, "flush_reason": "Manual Compaction"}
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928199163593, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 588867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8138, "largest_seqno": 8770, "table_properties": {"data_size": 585323, "index_size": 1324, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8364, "raw_average_key_size": 18, "raw_value_size": 577719, "raw_average_value_size": 1272, "num_data_blocks": 59, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928184, "oldest_key_time": 1764928184, "file_creation_time": 1764928199, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 11174 microseconds, and 5681 cpu microseconds.
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.163857) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 588867 bytes OK
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.163971) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.165967) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.165994) EVENT_LOG_v1 {"time_micros": 1764928199165987, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.166017) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 608940, prev total WAL file size 626159, number of live WAL files 2.
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.167158) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323538' seq:0, type:0; will stop at (end)
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(575KB)], [20(11MB)]
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928199167213, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 12618750, "oldest_snapshot_seqno": -1}
Dec 05 09:49:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:49:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:49:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:49:59.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3606 keys, 12150780 bytes, temperature: kUnknown
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928199384092, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 12150780, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12121731, "index_size": 18955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9029, "raw_key_size": 93141, "raw_average_key_size": 25, "raw_value_size": 12050496, "raw_average_value_size": 3341, "num_data_blocks": 820, "num_entries": 3606, "num_filter_entries": 3606, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764928199, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.384588) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12150780 bytes
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.387174) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.1 rd, 56.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.5 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(42.1) write-amplify(20.6) OK, records in: 4140, records dropped: 534 output_compression: NoCompression
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.387212) EVENT_LOG_v1 {"time_micros": 1764928199387197, "job": 6, "event": "compaction_finished", "compaction_time_micros": 217075, "compaction_time_cpu_micros": 25694, "output_level": 6, "num_output_files": 1, "total_output_size": 12150780, "num_input_records": 4140, "num_output_records": 3606, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928199387659, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928199391431, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.167022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.391550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.391556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.391557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.391559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:49:59 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:49:59.391560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:49:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:49:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:49:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:49:59.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:49:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:49:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:49:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 05 09:49:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:49:59 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:49:59 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:49:59 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.nltyrb on compute-0
Dec 05 09:49:59 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.nltyrb on compute-0
Dec 05 09:49:59 compute-0 sudo[99904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:49:59 compute-0 sudo[99904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:49:59 compute-0 sudo[99904]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 09:50:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 09:50:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :      osd.0 observed slow operation indications in BlueStore
Dec 05 09:50:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Dec 05 09:50:00 compute-0 sudo[99929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:00 compute-0 sudo[99929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:00 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c0022e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 05 09:50:00 compute-0 ceph-mon[74418]: osdmap e98: 3 total, 3 up, 3 in
Dec 05 09:50:00 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:00 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:00 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:00 compute-0 ceph-mon[74418]: Health detail: HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 09:50:00 compute-0 ceph-mon[74418]: [WRN] BLUESTORE_SLOW_OP_ALERT: 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 09:50:00 compute-0 ceph-mon[74418]:      osd.0 observed slow operation indications in BlueStore
Dec 05 09:50:00 compute-0 ceph-mon[74418]:      osd.1 observed slow operation indications in BlueStore
Dec 05 09:50:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 05 09:50:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 05 09:50:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:00 compute-0 podman[99998]: 2025-12-05 09:50:00.637645011 +0000 UTC m=+0.056113549 container create c4a2f0c355438b1fb5b1e38250542502ea9659f511b67167b45190c3ee59b483 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_easley, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, release=1793, vcs-type=git, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container)
Dec 05 09:50:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:00 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:00 compute-0 systemd[1]: Started libpod-conmon-c4a2f0c355438b1fb5b1e38250542502ea9659f511b67167b45190c3ee59b483.scope.
Dec 05 09:50:00 compute-0 podman[99998]: 2025-12-05 09:50:00.617756251 +0000 UTC m=+0.036224709 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 05 09:50:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:00 compute-0 podman[99998]: 2025-12-05 09:50:00.735505922 +0000 UTC m=+0.153974430 container init c4a2f0c355438b1fb5b1e38250542502ea9659f511b67167b45190c3ee59b483 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_easley, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, description=keepalived for Ceph, vcs-type=git, name=keepalived, version=2.2.4, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9)
Dec 05 09:50:00 compute-0 podman[99998]: 2025-12-05 09:50:00.741516759 +0000 UTC m=+0.159985207 container start c4a2f0c355438b1fb5b1e38250542502ea9659f511b67167b45190c3ee59b483 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_easley, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.openshift.expose-services=, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vendor=Red Hat, Inc., io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 09:50:00 compute-0 podman[99998]: 2025-12-05 09:50:00.745528904 +0000 UTC m=+0.163997422 container attach c4a2f0c355438b1fb5b1e38250542502ea9659f511b67167b45190c3ee59b483 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_easley, version=2.2.4, distribution-scope=public, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, name=keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-type=git, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 09:50:00 compute-0 ecstatic_easley[100016]: 0 0
Dec 05 09:50:00 compute-0 systemd[1]: libpod-c4a2f0c355438b1fb5b1e38250542502ea9659f511b67167b45190c3ee59b483.scope: Deactivated successfully.
Dec 05 09:50:00 compute-0 conmon[100016]: conmon c4a2f0c355438b1fb5b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4a2f0c355438b1fb5b1e38250542502ea9659f511b67167b45190c3ee59b483.scope/container/memory.events
Dec 05 09:50:00 compute-0 podman[99998]: 2025-12-05 09:50:00.748487231 +0000 UTC m=+0.166955679 container died c4a2f0c355438b1fb5b1e38250542502ea9659f511b67167b45190c3ee59b483 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_easley, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, distribution-scope=public, name=keepalived)
Dec 05 09:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1bc96cd79243bf833738f1fbe39523a28ae018968da0876a1250d949152fa8e-merged.mount: Deactivated successfully.
Dec 05 09:50:00 compute-0 podman[99998]: 2025-12-05 09:50:00.821696117 +0000 UTC m=+0.240164545 container remove c4a2f0c355438b1fb5b1e38250542502ea9659f511b67167b45190c3ee59b483 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_easley, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.expose-services=, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph)
Dec 05 09:50:00 compute-0 systemd[1]: libpod-conmon-c4a2f0c355438b1fb5b1e38250542502ea9659f511b67167b45190c3ee59b483.scope: Deactivated successfully.
Dec 05 09:50:00 compute-0 systemd[1]: Reloading.
Dec 05 09:50:01 compute-0 systemd-rc-local-generator[100067]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:50:01 compute-0 systemd-sysv-generator[100074]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:50:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:01 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:01 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 05 09:50:01 compute-0 ceph-mon[74418]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 05 09:50:01 compute-0 ceph-mon[74418]: Deploying daemon keepalived.rgw.default.compute-0.nltyrb on compute-0
Dec 05 09:50:01 compute-0 ceph-mon[74418]: osdmap e99: 3 total, 3 up, 3 in
Dec 05 09:50:01 compute-0 ceph-mon[74418]: pgmap v119: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec 05 09:50:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:01.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 05 09:50:01 compute-0 systemd[1]: Reloading.
Dec 05 09:50:01 compute-0 systemd-sysv-generator[100113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:50:01 compute-0 systemd-rc-local-generator[100110]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:50:01 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.nltyrb for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:50:01 compute-0 sudo[99757]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:01.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:01 compute-0 podman[100190]: 2025-12-05 09:50:01.882418633 +0000 UTC m=+0.056950671 container create ea3c33996d7a24a951431bdbd16be7a640fa5a87275d8dcdf4a6892554b69e8d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, name=keepalived, description=keepalived for Ceph, version=2.2.4, io.openshift.expose-services=, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1793, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0a15180173cebbb60980de4631087a1879c4c0d7cc95812fe4dfc2b9e6a6faa/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:01 compute-0 podman[100190]: 2025-12-05 09:50:01.935274596 +0000 UTC m=+0.109806664 container init ea3c33996d7a24a951431bdbd16be7a640fa5a87275d8dcdf4a6892554b69e8d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb, description=keepalived for Ceph, com.redhat.component=keepalived-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, version=2.2.4, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, io.buildah.version=1.28.2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, name=keepalived, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec 05 09:50:01 compute-0 podman[100190]: 2025-12-05 09:50:01.94498574 +0000 UTC m=+0.119517778 container start ea3c33996d7a24a951431bdbd16be7a640fa5a87275d8dcdf4a6892554b69e8d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, distribution-scope=public, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, com.redhat.component=keepalived-container, version=2.2.4, build-date=2023-02-22T09:23:20)
Dec 05 09:50:01 compute-0 podman[100190]: 2025-12-05 09:50:01.851043882 +0000 UTC m=+0.025575970 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 05 09:50:01 compute-0 bash[100190]: ea3c33996d7a24a951431bdbd16be7a640fa5a87275d8dcdf4a6892554b69e8d
Dec 05 09:50:01 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.nltyrb for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:50:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:01 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 05 09:50:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:01 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec 05 09:50:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:01 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 05 09:50:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:01 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 05 09:50:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:01 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Dec 05 09:50:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:01 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 05 09:50:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:01 2025: Starting VRRP child process, pid=4
Dec 05 09:50:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:01 2025: Startup complete
Dec 05 09:50:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:50:01 2025: (VI_0) Entering BACKUP STATE
Dec 05 09:50:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:01 2025: (VI_0) Entering BACKUP STATE (init)
Dec 05 09:50:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:01 2025: VRRP_Script(check_backend) succeeded
Dec 05 09:50:02 compute-0 sudo[99929]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 05 09:50:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:02 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev f91c231f-bb22-4de4-94fb-e77a985fe922 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec 05 09:50:02 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event f91c231f-bb22-4de4-94fb-e77a985fe922 (Updating ingress.rgw.default deployment (+4 -> 4)) in 9 seconds
Dec 05 09:50:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 05 09:50:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:02 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:02 compute-0 ceph-mgr[74711]: [progress INFO root] update: starting ev 72468d2e-0073-440e-a6ab-b77b64c4f593 (Updating prometheus deployment (+1 -> 1))
Dec 05 09:50:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 353 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 05 09:50:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec 05 09:50:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 05 09:50:02 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Dec 05 09:50:02 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Dec 05 09:50:02 compute-0 sshd-session[99066]: Connection closed by 192.168.122.30 port 46854
Dec 05 09:50:02 compute-0 sshd-session[99063]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:50:02 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Dec 05 09:50:02 compute-0 systemd[1]: session-37.scope: Consumed 8.663s CPU time.
Dec 05 09:50:02 compute-0 systemd-logind[789]: Session 37 logged out. Waiting for processes to exit.
Dec 05 09:50:02 compute-0 systemd-logind[789]: Removed session 37.
Dec 05 09:50:02 compute-0 sudo[100214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:02 compute-0 sudo[100214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:02 compute-0 sudo[100214]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:02 compute-0 sudo[100239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:02 compute-0 sudo[100239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:50:02 2025: (VI_0) Entering MASTER STATE
Dec 05 09:50:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:02 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c003a20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:03 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:03.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 05 09:50:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:50:03 2025: (VI_0) received an invalid passwd!
Dec 05 09:50:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:03 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Dec 05 09:50:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:03.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:04 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:04 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:04 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:04 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:04 compute-0 ceph-mon[74418]: pgmap v120: 353 pgs: 353 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 05 09:50:04 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 05 09:50:04 compute-0 ceph-mon[74418]: Deploying daemon prometheus.compute-0 on compute-0
Dec 05 09:50:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 05 09:50:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 05 09:50:04 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 05 09:50:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:04 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:50:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 353 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 05 09:50:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec 05 09:50:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 05 09:50:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:04 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Dec 05 09:50:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:50:04 2025: (VI_0) received an invalid passwd!
Dec 05 09:50:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:04 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 05 09:50:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:05 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c003a20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:05.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 100 pg[9.12( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=100 pruub=14.282114983s) [0] r=-1 lpr=100 pi=[55,100)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 282.110168457s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:05 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 100 pg[9.12( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=100 pruub=14.282077789s) [0] r=-1 lpr=100 pi=[55,100)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 282.110168457s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:50:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:50:05 2025: (VI_0) received an invalid passwd!
Dec 05 09:50:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:05 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Dec 05 09:50:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:05 2025: (VI_0) Entering MASTER STATE
Dec 05 09:50:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:05.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:05 compute-0 ceph-mgr[74711]: [progress INFO root] Writing back 27 completed events
Dec 05 09:50:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:50:06 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 05 09:50:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 05 09:50:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:06 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 1 unknown, 352 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 436 B/s rd, 0 op/s; 15 B/s, 0 objects/s recovering
Dec 05 09:50:06 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 05 09:50:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:50:06 2025: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 100, forcing new election
Dec 05 09:50:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:06 2025: (VI_0) received an invalid passwd!
Dec 05 09:50:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:06 2025: (VI_0) received an invalid passwd!
Dec 05 09:50:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:50:06 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Dec 05 09:50:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf[98244]: Fri Dec  5 09:50:06 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Dec 05 09:50:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-rgw-default-compute-0-nltyrb[100205]: Fri Dec  5 09:50:06 2025: (VI_0) received an invalid passwd!
Dec 05 09:50:06 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 05 09:50:06 compute-0 ceph-mon[74418]: osdmap e100: 3 total, 3 up, 3 in
Dec 05 09:50:06 compute-0 ceph-mon[74418]: pgmap v122: 353 pgs: 353 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 05 09:50:06 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 05 09:50:06 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:06 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 05 09:50:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:07 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:07.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:07.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:08 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c003bc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 1 unknown, 352 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Dec 05 09:50:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:08 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:09 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c003bc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:09.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 05 09:50:09 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 05 09:50:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 102 pg[9.12( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=102) [0]/[1] r=0 lpr=102 pi=[55,102)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:09 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 102 pg[9.12( v 52'1029 (0'0,52'1029] local-lis/les=55/57 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=102) [0]/[1] r=0 lpr=102 pi=[55,102)/1 crt=52'1029 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 09:50:09 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 05 09:50:09 compute-0 ceph-mon[74418]: pgmap v123: 353 pgs: 1 unknown, 352 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 436 B/s rd, 0 op/s; 15 B/s, 0 objects/s recovering
Dec 05 09:50:09 compute-0 ceph-mon[74418]: osdmap e101: 3 total, 3 up, 3 in
Dec 05 09:50:09 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:09.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:10 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 1 unknown, 352 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 500 B/s rd, 0 op/s
Dec 05 09:50:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:10 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 05 09:50:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:11 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:11.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:11.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:12 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c003bc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 1 unknown, 352 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 05 09:50:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:12 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:13 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:13.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 05 09:50:13 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 05 09:50:13 compute-0 ceph-mon[74418]: pgmap v125: 353 pgs: 1 unknown, 352 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Dec 05 09:50:13 compute-0 ceph-mon[74418]: osdmap e102: 3 total, 3 up, 3 in
Dec 05 09:50:13 compute-0 ceph-mon[74418]: pgmap v127: 353 pgs: 1 unknown, 352 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 500 B/s rd, 0 op/s
Dec 05 09:50:13 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 103 pg[9.12( v 52'1029 (0'0,52'1029] local-lis/les=102/103 n=4 ec=55/40 lis/c=55/55 les/c/f=57/57/0 sis=102) [0]/[1] async=[0] r=0 lpr=102 pi=[55,102)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:50:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:13.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:14 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c0091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 1 unknown, 352 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 128 B/s rd, 0 op/s
Dec 05 09:50:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:50:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 05 09:50:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 05 09:50:14 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 05 09:50:14 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 104 pg[9.12( v 52'1029 (0'0,52'1029] local-lis/les=102/103 n=4 ec=55/40 lis/c=102/55 les/c/f=103/57/0 sis=104 pruub=15.087646484s) [0] async=[0] r=-1 lpr=104 pi=[55,104)/1 crt=52'1029 lcod 0'0 mlcod 0'0 active pruub 292.283508301s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:14 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 104 pg[9.12( v 52'1029 (0'0,52'1029] local-lis/les=102/103 n=4 ec=55/40 lis/c=102/55 les/c/f=103/57/0 sis=104 pruub=15.087550163s) [0] r=-1 lpr=104 pi=[55,104)/1 crt=52'1029 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 292.283508301s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 09:50:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:14 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c003bc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:14 compute-0 ceph-mon[74418]: pgmap v128: 353 pgs: 1 unknown, 352 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 05 09:50:14 compute-0 ceph-mon[74418]: osdmap e103: 3 total, 3 up, 3 in
Dec 05 09:50:14 compute-0 ceph-mon[74418]: pgmap v130: 353 pgs: 1 unknown, 352 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 128 B/s rd, 0 op/s
Dec 05 09:50:14 compute-0 ceph-mon[74418]: osdmap e104: 3 total, 3 up, 3 in
Dec 05 09:50:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:15 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:15.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:15 compute-0 podman[100310]: 2025-12-05 09:50:15.438510022 +0000 UTC m=+12.574543687 volume create b50fa05d5993f507a9fc6c541864a3b51c34ee3074729d6f141542b23e0c112f
Dec 05 09:50:15 compute-0 podman[100310]: 2025-12-05 09:50:15.414544275 +0000 UTC m=+12.550577960 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec 05 09:50:15 compute-0 podman[100310]: 2025-12-05 09:50:15.494499057 +0000 UTC m=+12.630532722 container create b6bd060fedd083d5e95aef69cba845a80a53a7d92b2ba4c1f0e4b5f7a2b48952 (image=quay.io/prometheus/prometheus:v2.51.0, name=laughing_gagarin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:15 compute-0 systemd[1]: Started libpod-conmon-b6bd060fedd083d5e95aef69cba845a80a53a7d92b2ba4c1f0e4b5f7a2b48952.scope.
Dec 05 09:50:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c380571946d292ad82cdb5f8352065080bc5e43dce4f4ec44c9ed5cdf7e486a/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 05 09:50:15 compute-0 podman[100310]: 2025-12-05 09:50:15.628486843 +0000 UTC m=+12.764520558 container init b6bd060fedd083d5e95aef69cba845a80a53a7d92b2ba4c1f0e4b5f7a2b48952 (image=quay.io/prometheus/prometheus:v2.51.0, name=laughing_gagarin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:15 compute-0 podman[100310]: 2025-12-05 09:50:15.636624616 +0000 UTC m=+12.772658281 container start b6bd060fedd083d5e95aef69cba845a80a53a7d92b2ba4c1f0e4b5f7a2b48952 (image=quay.io/prometheus/prometheus:v2.51.0, name=laughing_gagarin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:15 compute-0 laughing_gagarin[100581]: 65534 65534
Dec 05 09:50:15 compute-0 systemd[1]: libpod-b6bd060fedd083d5e95aef69cba845a80a53a7d92b2ba4c1f0e4b5f7a2b48952.scope: Deactivated successfully.
Dec 05 09:50:15 compute-0 conmon[100581]: conmon b6bd060fedd083d5e95a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b6bd060fedd083d5e95aef69cba845a80a53a7d92b2ba4c1f0e4b5f7a2b48952.scope/container/memory.events
Dec 05 09:50:15 compute-0 podman[100310]: 2025-12-05 09:50:15.646014571 +0000 UTC m=+12.782048266 container attach b6bd060fedd083d5e95aef69cba845a80a53a7d92b2ba4c1f0e4b5f7a2b48952 (image=quay.io/prometheus/prometheus:v2.51.0, name=laughing_gagarin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:15 compute-0 podman[100310]: 2025-12-05 09:50:15.646581716 +0000 UTC m=+12.782615401 container died b6bd060fedd083d5e95aef69cba845a80a53a7d92b2ba4c1f0e4b5f7a2b48952 (image=quay.io/prometheus/prometheus:v2.51.0, name=laughing_gagarin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 05 09:50:15 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 05 09:50:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c380571946d292ad82cdb5f8352065080bc5e43dce4f4ec44c9ed5cdf7e486a-merged.mount: Deactivated successfully.
Dec 05 09:50:15 compute-0 podman[100310]: 2025-12-05 09:50:15.703863405 +0000 UTC m=+12.839897060 container remove b6bd060fedd083d5e95aef69cba845a80a53a7d92b2ba4c1f0e4b5f7a2b48952 (image=quay.io/prometheus/prometheus:v2.51.0, name=laughing_gagarin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:15 compute-0 podman[100310]: 2025-12-05 09:50:15.707397018 +0000 UTC m=+12.843430683 volume remove b50fa05d5993f507a9fc6c541864a3b51c34ee3074729d6f141542b23e0c112f
Dec 05 09:50:15 compute-0 systemd[1]: libpod-conmon-b6bd060fedd083d5e95aef69cba845a80a53a7d92b2ba4c1f0e4b5f7a2b48952.scope: Deactivated successfully.
Dec 05 09:50:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:15.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:15 compute-0 podman[100600]: 2025-12-05 09:50:15.783976671 +0000 UTC m=+0.041273200 volume create b80e607f92c29e684b1ca06f9135cb23998005cd9f711016dbef46e416fa3849
Dec 05 09:50:15 compute-0 podman[100600]: 2025-12-05 09:50:15.79077544 +0000 UTC m=+0.048071969 container create bf923402986edfc88ec277627efe452f24391578fc3295576e56df78d0265513 (image=quay.io/prometheus/prometheus:v2.51.0, name=silly_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:15 compute-0 systemd[1]: Started libpod-conmon-bf923402986edfc88ec277627efe452f24391578fc3295576e56df78d0265513.scope.
Dec 05 09:50:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffaea466fa8aacec712e1ce62ce3c079f209603a8888f33458709a8a2f6461a/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:15 compute-0 podman[100600]: 2025-12-05 09:50:15.768758003 +0000 UTC m=+0.026054552 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec 05 09:50:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:16 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 05 09:50:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec 05 09:50:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 05 09:50:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:50:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:50:16 compute-0 podman[100600]: 2025-12-05 09:50:16.216637243 +0000 UTC m=+0.473933792 container init bf923402986edfc88ec277627efe452f24391578fc3295576e56df78d0265513 (image=quay.io/prometheus/prometheus:v2.51.0, name=silly_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:16 compute-0 podman[100600]: 2025-12-05 09:50:16.223310507 +0000 UTC m=+0.480607046 container start bf923402986edfc88ec277627efe452f24391578fc3295576e56df78d0265513 (image=quay.io/prometheus/prometheus:v2.51.0, name=silly_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:16 compute-0 silly_golick[100616]: 65534 65534
Dec 05 09:50:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:50:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:50:16 compute-0 systemd[1]: libpod-bf923402986edfc88ec277627efe452f24391578fc3295576e56df78d0265513.scope: Deactivated successfully.
Dec 05 09:50:16 compute-0 podman[100600]: 2025-12-05 09:50:16.229292633 +0000 UTC m=+0.486589212 container attach bf923402986edfc88ec277627efe452f24391578fc3295576e56df78d0265513 (image=quay.io/prometheus/prometheus:v2.51.0, name=silly_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:16 compute-0 podman[100600]: 2025-12-05 09:50:16.229613243 +0000 UTC m=+0.486909782 container died bf923402986edfc88ec277627efe452f24391578fc3295576e56df78d0265513 (image=quay.io/prometheus/prometheus:v2.51.0, name=silly_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:50:16 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:50:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ffaea466fa8aacec712e1ce62ce3c079f209603a8888f33458709a8a2f6461a-merged.mount: Deactivated successfully.
Dec 05 09:50:16 compute-0 podman[100600]: 2025-12-05 09:50:16.279657191 +0000 UTC m=+0.536953730 container remove bf923402986edfc88ec277627efe452f24391578fc3295576e56df78d0265513 (image=quay.io/prometheus/prometheus:v2.51.0, name=silly_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:16 compute-0 podman[100600]: 2025-12-05 09:50:16.282499476 +0000 UTC m=+0.539796015 volume remove b80e607f92c29e684b1ca06f9135cb23998005cd9f711016dbef46e416fa3849
Dec 05 09:50:16 compute-0 systemd[1]: libpod-conmon-bf923402986edfc88ec277627efe452f24391578fc3295576e56df78d0265513.scope: Deactivated successfully.
Dec 05 09:50:16 compute-0 systemd[1]: Reloading.
Dec 05 09:50:16 compute-0 systemd-rc-local-generator[100658]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:50:16 compute-0 systemd-sysv-generator[100662]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:50:16 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 84504327-baa6-4442-a1f6-d7f2634264bf (Global Recovery Event) in 31 seconds
Dec 05 09:50:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 05 09:50:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:16 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c001370 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:16 compute-0 systemd[1]: Reloading.
Dec 05 09:50:16 compute-0 systemd-sysv-generator[100709]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:50:16 compute-0 systemd-rc-local-generator[100705]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:50:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:17 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec 05 09:50:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:17.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 05 09:50:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 05 09:50:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 05 09:50:17 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 05 09:50:17 compute-0 ceph-mon[74418]: osdmap e105: 3 total, 3 up, 3 in
Dec 05 09:50:17 compute-0 ceph-mon[74418]: pgmap v133: 353 pgs: 353 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 05 09:50:17 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 05 09:50:17 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:50:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:17.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:17 compute-0 sshd-session[100730]: Accepted publickey for zuul from 192.168.122.30 port 60574 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:50:17 compute-0 systemd-logind[789]: New session 38 of user zuul.
Dec 05 09:50:17 compute-0 systemd[1]: Started Session 38 of User zuul.
Dec 05 09:50:17 compute-0 sshd-session[100730]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:50:17 compute-0 podman[100763]: 2025-12-05 09:50:17.973619698 +0000 UTC m=+0.045096441 container create 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699ffee169a7160969eb374d02c84e1e89be37af123bd2080777af316c58e489/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699ffee169a7160969eb374d02c84e1e89be37af123bd2080777af316c58e489/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:18 compute-0 podman[100763]: 2025-12-05 09:50:17.954348893 +0000 UTC m=+0.025825656 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:18 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64000d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:18 compute-0 podman[100763]: 2025-12-05 09:50:18.126800736 +0000 UTC m=+0.198277499 container init 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:18 compute-0 podman[100763]: 2025-12-05 09:50:18.131528889 +0000 UTC m=+0.203005632 container start 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.171Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.171Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.171Z caller=main.go:623 level=info host_details="(Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 x86_64 compute-0 (none))"
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.171Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.171Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.177Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.178Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Dec 05 09:50:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 444 B/s rd, 0 op/s; 23 B/s, 0 objects/s recovering
Dec 05 09:50:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec 05 09:50:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.182Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.182Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.184Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.184Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.28µs
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.184Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.185Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.185Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=31.721µs wal_replay_duration=509.654µs wbl_replay_duration=170ns total_replay_duration=564.175µs
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.188Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.188Z caller=main.go:1153 level=info msg="TSDB started"
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.188Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Dec 05 09:50:18 compute-0 bash[100763]: 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8
Dec 05 09:50:18 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.223Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=34.478071ms db_storage=2.08µs remote_storage=2.76µs web_handler=620ns query_engine=1.38µs scrape=7.378934ms scrape_sd=255.606µs notify=22.291µs notify_sd=18.09µs rules=26.028101ms tracing=11.15µs
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.223Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0[100799]: ts=2025-12-05T09:50:18.223Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Dec 05 09:50:18 compute-0 sudo[100239]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 05 09:50:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:18 compute-0 ceph-mgr[74711]: [progress INFO root] complete: finished ev 72468d2e-0073-440e-a6ab-b77b64c4f593 (Updating prometheus deployment (+1 -> 1))
Dec 05 09:50:18 compute-0 ceph-mgr[74711]: [progress INFO root] Completed event 72468d2e-0073-440e-a6ab-b77b64c4f593 (Updating prometheus deployment (+1 -> 1)) in 16 seconds
Dec 05 09:50:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Dec 05 09:50:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec 05 09:50:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 05 09:50:18 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 05 09:50:18 compute-0 ceph-mon[74418]: osdmap e106: 3 total, 3 up, 3 in
Dec 05 09:50:18 compute-0 ceph-mon[74418]: pgmap v135: 353 pgs: 353 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 444 B/s rd, 0 op/s; 23 B/s, 0 objects/s recovering
Dec 05 09:50:18 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 05 09:50:18 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:18 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:18 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:18 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec 05 09:50:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 05 09:50:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 05 09:50:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 05 09:50:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:18 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:18 compute-0 python3.9[100947]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 05 09:50:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:19 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002370 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:19.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:19.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:20 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 552 B/s rd, 0 op/s; 19 B/s, 0 objects/s recovering
Dec 05 09:50:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:50:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 05 09:50:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec 05 09:50:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 05 09:50:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr respawn  1: '-n'
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr respawn  2: 'mgr.compute-0.hvnxai'
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr respawn  3: '-f'
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr respawn  4: '--setuser'
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr respawn  5: 'ceph'
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr respawn  6: '--setgroup'
Dec 05 09:50:20 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.hvnxai(active, since 2m), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:50:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 05 09:50:20 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 05 09:50:20 compute-0 ceph-mon[74418]: osdmap e107: 3 total, 3 up, 3 in
Dec 05 09:50:20 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 05 09:50:20 compute-0 sshd-session[92080]: Connection closed by 192.168.122.100 port 56438
Dec 05 09:50:20 compute-0 sshd-session[92050]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:50:20 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Dec 05 09:50:20 compute-0 systemd[1]: session-35.scope: Consumed 1min 934ms CPU time.
Dec 05 09:50:20 compute-0 systemd-logind[789]: Session 35 logged out. Waiting for processes to exit.
Dec 05 09:50:20 compute-0 systemd-logind[789]: Removed session 35.
Dec 05 09:50:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ignoring --setuser ceph since I am not root
Dec 05 09:50:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ignoring --setgroup ceph since I am not root
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: pidfile_write: ignore empty --pid-file
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'alerts'
Dec 05 09:50:20 compute-0 python3.9[101121]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:50:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:20.596+0000 7f68e1ef5140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'balancer'
Dec 05 09:50:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:20 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf640018b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:20.696+0000 7f68e1ef5140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:50:20 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'cephadm'
Dec 05 09:50:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:21 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002370 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:21.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:21 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'crash'
Dec 05 09:50:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 05 09:50:21 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 05 09:50:21 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec 05 09:50:21 compute-0 ceph-mon[74418]: mgrmap e25: compute-0.hvnxai(active, since 2m), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:50:21 compute-0 ceph-mon[74418]: osdmap e108: 3 total, 3 up, 3 in
Dec 05 09:50:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:21.596+0000 7f68e1ef5140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:50:21 compute-0 ceph-mgr[74711]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:50:21 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'dashboard'
Dec 05 09:50:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 05 09:50:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 05 09:50:21 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 05 09:50:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:21.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:22 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:22 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'devicehealth'
Dec 05 09:50:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:22.308+0000 7f68e1ef5140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:50:22 compute-0 ceph-mgr[74711]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:50:22 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'diskprediction_local'
Dec 05 09:50:22 compute-0 sudo[101310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnisgdmisyctdxojcfnmlnevnwedvwzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928221.9638968-93-276293332646844/AnsiballZ_command.py'
Dec 05 09:50:22 compute-0 sudo[101310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:50:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 05 09:50:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 05 09:50:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]:   from numpy import show_config as show_numpy_config
Dec 05 09:50:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:22.489+0000 7f68e1ef5140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:50:22 compute-0 ceph-mgr[74711]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:50:22 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'influx'
Dec 05 09:50:22 compute-0 python3.9[101312]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:50:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:22.565+0000 7f68e1ef5140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:50:22 compute-0 ceph-mgr[74711]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:50:22 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'insights'
Dec 05 09:50:22 compute-0 sudo[101310]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:22 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'iostat'
Dec 05 09:50:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:22 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:22.719+0000 7f68e1ef5140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:50:22 compute-0 ceph-mgr[74711]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:50:22 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'k8sevents'
Dec 05 09:50:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:23 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf640018b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:23 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'localpool'
Dec 05 09:50:23 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'mds_autoscaler'
Dec 05 09:50:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:23.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:23 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'mirroring'
Dec 05 09:50:23 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'nfs'
Dec 05 09:50:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:23.776+0000 7f68e1ef5140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:50:23 compute-0 ceph-mgr[74711]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:50:23 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'orchestrator'
Dec 05 09:50:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:23.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:24.052+0000 7f68e1ef5140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'osd_perf_query'
Dec 05 09:50:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:24 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002d00 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:24.145+0000 7f68e1ef5140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'osd_support'
Dec 05 09:50:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:24.222+0000 7f68e1ef5140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'pg_autoscaler'
Dec 05 09:50:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:24.316+0000 7f68e1ef5140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'progress'
Dec 05 09:50:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:24.407+0000 7f68e1ef5140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'prometheus'
Dec 05 09:50:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:24 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:24 compute-0 sudo[101465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoobrxzxqqpgbnkebmvgtjxrvkzsudbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928224.2343647-129-65019078836710/AnsiballZ_stat.py'
Dec 05 09:50:24 compute-0 sudo[101465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:50:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:24.844+0000 7f68e1ef5140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rbd_support'
Dec 05 09:50:24 compute-0 python3.9[101467]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:50:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:24.949+0000 7f68e1ef5140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:50:24 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'restful'
Dec 05 09:50:24 compute-0 sudo[101465]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095025 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:50:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:25 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:25 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rgw'
Dec 05 09:50:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:25.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:25.462+0000 7f68e1ef5140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:50:25 compute-0 ceph-mgr[74711]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:50:25 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'rook'
Dec 05 09:50:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:25.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:26 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf640018b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:26.138+0000 7f68e1ef5140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'selftest'
Dec 05 09:50:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:26.219+0000 7f68e1ef5140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'snap_schedule'
Dec 05 09:50:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:26.314+0000 7f68e1ef5140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'stats'
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'status'
Dec 05 09:50:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:26.479+0000 7f68e1ef5140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'telegraf'
Dec 05 09:50:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:26.550+0000 7f68e1ef5140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'telemetry'
Dec 05 09:50:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:26 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002d00 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:26.708+0000 7f68e1ef5140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'test_orchestrator'
Dec 05 09:50:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 05 09:50:26 compute-0 ceph-mon[74418]: from='mgr.14442 192.168.122.100:0/1976510507' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 05 09:50:26 compute-0 ceph-mon[74418]: osdmap e109: 3 total, 3 up, 3 in
Dec 05 09:50:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 05 09:50:26 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 05 09:50:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:26.962+0000 7f68e1ef5140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:50:26 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'volumes'
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:27 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:27.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:27.269+0000 7f68e1ef5140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr[py] Loading python module 'zabbix'
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:27.351+0000 7f68e1ef5140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Active manager daemon compute-0.hvnxai restarted
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hvnxai
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: ms_deliver_dispatch: unhandled message 0x5623d3f55860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr handle_mgr_map Activating!
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.hvnxai(active, starting, since 0.0358184s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr handle_mgr_map I am now activating
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.hfgtsk"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.hfgtsk"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e10 all = 0
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.qyxerc"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.qyxerc"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e10 all = 0
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.hxfsnw"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.hxfsnw"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e10 all = 0
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.unhddt", "id": "compute-1.unhddt"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unhddt", "id": "compute-1.unhddt"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.wewrgp", "id": "compute-2.wewrgp"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-2.wewrgp", "id": "compute-2.wewrgp"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e10 all = 1
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: balancer
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Manager daemon compute-0.hvnxai is now available
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Starting
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:50:27
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: cephadm
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: crash
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: dashboard
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO sso] Loading SSO DB version=1
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: devicehealth
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Starting
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: iostat
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: nfs
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: orchestrator
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: pg_autoscaler
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: progress
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [progress INFO root] Loading...
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f6867ac6c40>, <progress.module.GhostEvent object at 0x7f6867ac6e50>, <progress.module.GhostEvent object at 0x7f6867ac6e80>, <progress.module.GhostEvent object at 0x7f6867ac6eb0>, <progress.module.GhostEvent object at 0x7f6867ac6ee0>, <progress.module.GhostEvent object at 0x7f6867ac6f10>, <progress.module.GhostEvent object at 0x7f6867ac6f40>, <progress.module.GhostEvent object at 0x7f6867ac6f70>, <progress.module.GhostEvent object at 0x7f6867ac6fa0>, <progress.module.GhostEvent object at 0x7f6867ac6fd0>, <progress.module.GhostEvent object at 0x7f686023d040>, <progress.module.GhostEvent object at 0x7f686023d070>, <progress.module.GhostEvent object at 0x7f686023d0a0>, <progress.module.GhostEvent object at 0x7f686023d0d0>, <progress.module.GhostEvent object at 0x7f686023d100>, <progress.module.GhostEvent object at 0x7f686023d130>, <progress.module.GhostEvent object at 0x7f686023d160>, <progress.module.GhostEvent object at 0x7f686023d190>, <progress.module.GhostEvent object at 0x7f686023d1c0>, <progress.module.GhostEvent object at 0x7f686023d1f0>, <progress.module.GhostEvent object at 0x7f686023d220>, <progress.module.GhostEvent object at 0x7f686023d250>, <progress.module.GhostEvent object at 0x7f686023d280>, <progress.module.GhostEvent object at 0x7f686023d2b0>, <progress.module.GhostEvent object at 0x7f686023d2e0>, <progress.module.GhostEvent object at 0x7f686023d310>, <progress.module.GhostEvent object at 0x7f686023d340>] historic events
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [progress INFO root] Loaded OSDMap, ready.
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: prometheus
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [prometheus INFO root] server_addr: :: server_port: 9283
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [prometheus INFO root] Cache enabled
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [prometheus INFO root] starting metric collection thread
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [prometheus INFO root] Starting engine...
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: [05/Dec/2025:09:50:27] ENGINE Bus STARTING
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.error] [05/Dec/2025:09:50:27] ENGINE Bus STARTING
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: CherryPy Checker:
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: The Application mounted at '' has an empty config.
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] recovery thread starting
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] starting setup
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: rbd_support
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: restful
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [restful INFO root] server_addr: :: server_port: 8003
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [restful WARNING root] server not running: no certificate configured
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: status
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: telemetry
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:50:27 compute-0 sudo[101726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkgloblfpgtusrakjbztofmpodescino ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928227.1228178-162-164133776946111/AnsiballZ_file.py'
Dec 05 09:50:27 compute-0 sudo[101726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] PerfHandler: starting
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: mgr load Constructed class from module: volumes
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: [05/Dec/2025:09:50:27] ENGINE Serving on http://:::9283
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:27.602+0000 7f6849e00640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.error] [05/Dec/2025:09:50:27] ENGINE Serving on http://:::9283
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: [05/Dec/2025:09:50:27] ENGINE Bus STARTED
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.error] [05/Dec/2025:09:50:27] ENGINE Bus STARTED
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [prometheus INFO root] Engine started.
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:27.604+0000 7f6845df8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:27.604+0000 7f6845df8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:27.604+0000 7f6845df8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:27.604+0000 7f6845df8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:50:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T09:50:27.604+0000 7f6845df8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TaskHandler: starting
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"} v 0)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] setup complete
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unhddt restarted
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unhddt started
Dec 05 09:50:27 compute-0 sshd-session[101780]: Accepted publickey for ceph-admin from 192.168.122.100 port 36210 ssh2: RSA SHA256:MxBkUB4+lcwSNDaUavEN0XQWfXuGmKDyxiueeVUwNsk
Dec 05 09:50:27 compute-0 python3.9[101735]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:50:27 compute-0 systemd-logind[789]: New session 39 of user ceph-admin.
Dec 05 09:50:27 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Dec 05 09:50:27 compute-0 sshd-session[101780]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 09:50:27 compute-0 sudo[101726]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 05 09:50:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 05 09:50:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:27.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 05 09:50:27 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 05 09:50:27 compute-0 ceph-mon[74418]: osdmap e110: 3 total, 3 up, 3 in
Dec 05 09:50:27 compute-0 ceph-mon[74418]: Active manager daemon compute-0.hvnxai restarted
Dec 05 09:50:27 compute-0 ceph-mon[74418]: Activating manager daemon compute-0.hvnxai
Dec 05 09:50:27 compute-0 ceph-mon[74418]: osdmap e111: 3 total, 3 up, 3 in
Dec 05 09:50:27 compute-0 ceph-mon[74418]: mgrmap e26: compute-0.hvnxai(active, starting, since 0.0358184s), standbys: compute-1.unhddt, compute-2.wewrgp
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.hfgtsk"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.qyxerc"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.hxfsnw"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hvnxai", "id": "compute-0.hvnxai"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unhddt", "id": "compute-1.unhddt"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr metadata", "who": "compute-2.wewrgp", "id": "compute-2.wewrgp"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: Manager daemon compute-0.hvnxai is now available
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/mirror_snapshot_schedule"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hvnxai/trash_purge_schedule"}]: dispatch
Dec 05 09:50:27 compute-0 ceph-mon[74418]: Standby manager daemon compute-1.unhddt restarted
Dec 05 09:50:27 compute-0 ceph-mon[74418]: Standby manager daemon compute-1.unhddt started
Dec 05 09:50:27 compute-0 sudo[101810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:27 compute-0 sudo[101810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.wewrgp restarted
Dec 05 09:50:27 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.wewrgp started
Dec 05 09:50:27 compute-0 sudo[101810]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:28 compute-0 sudo[101845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 09:50:28 compute-0 sudo[101845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:28 compute-0 ceph-mgr[74711]: [dashboard INFO dashboard.module] Engine started.
Dec 05 09:50:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:28 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:28 compute-0 sudo[102017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asmpdpcnmzdmsubjvycdltsdwtquuuzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928228.1132572-189-241356357941080/AnsiballZ_file.py'
Dec 05 09:50:28 compute-0 sudo[102017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:50:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 05 09:50:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.hvnxai(active, since 1.06062s), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:50:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 05 09:50:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 05 09:50:28 compute-0 python3.9[102020]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:50:28 compute-0 sudo[102017]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:28 compute-0 podman[102067]: 2025-12-05 09:50:28.656368371 +0000 UTC m=+0.113538402 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 09:50:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:28 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:28 compute-0 podman[102067]: 2025-12-05 09:50:28.756559473 +0000 UTC m=+0.213729504 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 09:50:28 compute-0 ceph-mon[74418]: Standby manager daemon compute-2.wewrgp restarted
Dec 05 09:50:28 compute-0 ceph-mon[74418]: Standby manager daemon compute-2.wewrgp started
Dec 05 09:50:28 compute-0 ceph-mon[74418]: mgrmap e27: compute-0.hvnxai(active, since 1.06062s), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:50:28 compute-0 ceph-mon[74418]: osdmap e112: 3 total, 3 up, 3 in
Dec 05 09:50:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:29 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:29 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:50:29] ENGINE Bus STARTING
Dec 05 09:50:29 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:50:29] ENGINE Bus STARTING
Dec 05 09:50:29 compute-0 podman[102283]: 2025-12-05 09:50:29.241676247 +0000 UTC m=+0.062502037 container exec dc2521f476ac6cd8b02d9a95c2d20034aa296ae30c8ddb7ef7e3087931bef2ec (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:29.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:29 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:50:29] ENGINE Serving on http://192.168.122.100:8765
Dec 05 09:50:29 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:50:29] ENGINE Serving on http://192.168.122.100:8765
Dec 05 09:50:29 compute-0 podman[102283]: 2025-12-05 09:50:29.279668441 +0000 UTC m=+0.100494241 container exec_died dc2521f476ac6cd8b02d9a95c2d20034aa296ae30c8ddb7ef7e3087931bef2ec (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v5: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec 05 09:50:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 05 09:50:29 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:50:29] ENGINE Serving on https://192.168.122.100:7150
Dec 05 09:50:29 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:50:29] ENGINE Serving on https://192.168.122.100:7150
Dec 05 09:50:29 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:50:29] ENGINE Bus STARTED
Dec 05 09:50:29 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:50:29] ENGINE Bus STARTED
Dec 05 09:50:29 compute-0 ceph-mgr[74711]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:50:29] ENGINE Client ('192.168.122.100', 56106) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:50:29 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:50:29] ENGINE Client ('192.168.122.100', 56106) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:50:29 compute-0 python3.9[102378]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:50:29 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Check health
Dec 05 09:50:29 compute-0 network[102449]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:50:29 compute-0 network[102450]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:50:29 compute-0 network[102451]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:50:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 05 09:50:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 05 09:50:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 05 09:50:29 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 05 09:50:29 compute-0 podman[102481]: 2025-12-05 09:50:29.774790287 +0000 UTC m=+0.069705705 container exec d1ea233284d0d310cc076ca9ad62473a1bc421943ae196b1f9584786262f3156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:50:29 compute-0 podman[102481]: 2025-12-05 09:50:29.787655893 +0000 UTC m=+0.082571291 container exec_died d1ea233284d0d310cc076ca9ad62473a1bc421943ae196b1f9584786262f3156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:50:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:29.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:29 compute-0 ceph-mon[74418]: [05/Dec/2025:09:50:29] ENGINE Bus STARTING
Dec 05 09:50:29 compute-0 ceph-mon[74418]: [05/Dec/2025:09:50:29] ENGINE Serving on http://192.168.122.100:8765
Dec 05 09:50:29 compute-0 ceph-mon[74418]: pgmap v5: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:29 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 05 09:50:29 compute-0 ceph-mon[74418]: [05/Dec/2025:09:50:29] ENGINE Serving on https://192.168.122.100:7150
Dec 05 09:50:29 compute-0 ceph-mon[74418]: [05/Dec/2025:09:50:29] ENGINE Bus STARTED
Dec 05 09:50:29 compute-0 ceph-mon[74418]: [05/Dec/2025:09:50:29] ENGINE Client ('192.168.122.100', 56106) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:50:29 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 05 09:50:29 compute-0 ceph-mon[74418]: osdmap e113: 3 total, 3 up, 3 in
Dec 05 09:50:29 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.hvnxai(active, since 2s), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:50:29 compute-0 podman[102547]: 2025-12-05 09:50:29.990265796 +0000 UTC m=+0.049145738 container exec d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 09:50:30 compute-0 podman[102547]: 2025-12-05 09:50:30.003533412 +0000 UTC m=+0.062413324 container exec_died d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 09:50:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:30 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:30 compute-0 podman[102614]: 2025-12-05 09:50:30.19910546 +0000 UTC m=+0.047485284 container exec f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.openshift.expose-services=, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Dec 05 09:50:30 compute-0 podman[102614]: 2025-12-05 09:50:30.213838575 +0000 UTC m=+0.062218359 container exec_died f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, name=keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, version=2.2.4, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.expose-services=, release=1793, architecture=x86_64)
Dec 05 09:50:30 compute-0 podman[102681]: 2025-12-05 09:50:30.446772191 +0000 UTC m=+0.053391758 container exec aa11c6973d139c2e9bb6746f25caf931656607e7034cefb81d97cc477f867cd1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:30 compute-0 podman[102681]: 2025-12-05 09:50:30.488635656 +0000 UTC m=+0.095255183 container exec_died aa11c6973d139c2e9bb6746f25caf931656607e7034cefb81d97cc477f867cd1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:30 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:31 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:50:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:31.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v7: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec 05 09:50:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 05 09:50:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 05 09:50:31 compute-0 ceph-mon[74418]: mgrmap e28: compute-0.hvnxai(active, since 2s), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:50:31 compute-0 podman[102762]: 2025-12-05 09:50:31.646268918 +0000 UTC m=+0.195782914 container exec bfc89c7b51db319a90bd517ef6d4861794d073950d7be4a9d66708be3b568f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 05 09:50:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 05 09:50:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:31 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 05 09:50:31 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.hvnxai(active, since 4s), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:50:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:50:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:50:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:31.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:31 compute-0 podman[102762]: 2025-12-05 09:50:31.906697313 +0000 UTC m=+0.456211279 container exec_died bfc89c7b51db319a90bd517ef6d4861794d073950d7be4a9d66708be3b568f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:32 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:32 compute-0 podman[102919]: 2025-12-05 09:50:32.367279894 +0000 UTC m=+0.081669948 container exec 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:50:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:50:32 compute-0 podman[102919]: 2025-12-05 09:50:32.445665045 +0000 UTC m=+0.160055049 container exec_died 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:32 compute-0 sudo[101845]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:32 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:50:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:33 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:33 compute-0 ceph-mon[74418]: pgmap v7: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:33 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 05 09:50:33 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 05 09:50:33 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:33 compute-0 ceph-mon[74418]: osdmap e114: 3 total, 3 up, 3 in
Dec 05 09:50:33 compute-0 ceph-mon[74418]: mgrmap e29: compute-0.hvnxai(active, since 4s), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:50:33 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:33 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec 05 09:50:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:33.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 05 09:50:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v9: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:33.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec 05 09:50:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 05 09:50:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:34 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 05 09:50:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:34 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 05 09:50:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 05 09:50:34 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:34 compute-0 ceph-mon[74418]: pgmap v9: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:34 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 05 09:50:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:34 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 05 09:50:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:50:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:50:34 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=85/85 les/c/f=86/86/0 sis=115) [1] r=0 lpr=115 pi=[85,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:50:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 05 09:50:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 09:50:34 compute-0 sudo[103112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:34 compute-0 sudo[103112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:34 compute-0 sudo[103112]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:50:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 05 09:50:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 09:50:34 compute-0 sudo[103154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:50:35 compute-0 sudo[103154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:35 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:35 compute-0 python3.9[103216]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:50:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:35.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v11: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 05 09:50:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec 05 09:50:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 05 09:50:35 compute-0 sudo[103154]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:35 compute-0 sudo[103322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:35 compute-0 sudo[103322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:35 compute-0 sudo[103322]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:35 compute-0 sudo[103370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 05 09:50:35 compute-0 sudo[103370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:50:35] "GET /metrics HTTP/1.1" 200 46663 "" "Prometheus/2.51.0"
Dec 05 09:50:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:50:35] "GET /metrics HTTP/1.1" 200 46663 "" "Prometheus/2.51.0"
Dec 05 09:50:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec 05 09:50:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 05 09:50:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:35 compute-0 ceph-mon[74418]: osdmap e115: 3 total, 3 up, 3 in
Dec 05 09:50:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 09:50:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 09:50:35 compute-0 ceph-mon[74418]: pgmap v11: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 05 09:50:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 05 09:50:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:35.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 05 09:50:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec 05 09:50:35 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec 05 09:50:35 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=85/85 les/c/f=86/86/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[85,116)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:35 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=85/85 les/c/f=86/86/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[85,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:50:35 compute-0 sudo[103370]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:36 compute-0 python3.9[103445]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:50:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:36 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:36 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:50:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec 05 09:50:36 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 116 pg[9.1a( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=86/86 les/c/f=87/87/0 sis=116) [1] r=0 lpr=116 pi=[86,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:50:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 05 09:50:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 09:50:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:50:36 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:50:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:50:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec 05 09:50:36 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 05 09:50:36 compute-0 ceph-mon[74418]: osdmap e116: 3 total, 3 up, 3 in
Dec 05 09:50:36 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:36 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec 05 09:50:36 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:50:36 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:50:36 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:50:36 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:50:36 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:50:36 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:50:37 compute-0 sudo[103569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:50:37 compute-0 sudo[103569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103569]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 sudo[103598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph
Dec 05 09:50:37 compute-0 sudo[103598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103598]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 sudo[103649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:50:37 compute-0 sudo[103649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103649]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:37 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:37 compute-0 sudo[103695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:37 compute-0 sudo[103695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103695]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 sudo[103720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:50:37 compute-0 sudo[103720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103720]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:37.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:37 compute-0 sudo[103768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:50:37 compute-0 sudo[103768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103768]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 sudo[103793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new
Dec 05 09:50:37 compute-0 sudo[103793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103793]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:37 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:50:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v14: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 12 op/s
Dec 05 09:50:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec 05 09:50:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 05 09:50:37 compute-0 python3.9[103692]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:50:37 compute-0 sudo[103818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:50:37 compute-0 sudo[103818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103818]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:50:37 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:50:37 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:50:37 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:50:37 compute-0 sudo[103847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:50:37 compute-0 sudo[103847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103847]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:50:37 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:50:37 compute-0 sudo[103872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:50:37 compute-0 sudo[103872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103872]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 sudo[103897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:50:37 compute-0 sudo[103897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103897]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 sudo[103930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:37 compute-0 sudo[103930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103930]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 sudo[103973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:50:37 compute-0 sudo[103973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[103973]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:37.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:37 compute-0 sudo[104021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:50:37 compute-0 sudo[104021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[104021]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 sudo[104046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new
Dec 05 09:50:37 compute-0 sudo[104046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[104046]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec 05 09:50:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 05 09:50:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec 05 09:50:37 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec 05 09:50:37 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 118 pg[9.19( v 52'1029 (0'0,52'1029] local-lis/les=0/0 n=7 ec=55/40 lis/c=116/85 les/c/f=117/86/0 sis=118) [1] r=0 lpr=118 pi=[85,118)/1 luod=0'0 crt=52'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:37 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 118 pg[9.19( v 52'1029 (0'0,52'1029] local-lis/les=0/0 n=7 ec=55/40 lis/c=116/85 les/c/f=117/86/0 sis=118) [1] r=0 lpr=118 pi=[85,118)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:50:37 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=86/86 les/c/f=87/87/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[86,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:37 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=86/86 les/c/f=87/87/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[86,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:50:37 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 118 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=67/67 les/c/f=68/68/0 sis=118) [1] r=0 lpr=118 pi=[67,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:50:37 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:37 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 09:50:37 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:37 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:50:37 compute-0 ceph-mon[74418]: osdmap e117: 3 total, 3 up, 3 in
Dec 05 09:50:37 compute-0 ceph-mon[74418]: Updating compute-0:/etc/ceph/ceph.conf
Dec 05 09:50:37 compute-0 ceph-mon[74418]: Updating compute-1:/etc/ceph/ceph.conf
Dec 05 09:50:37 compute-0 ceph-mon[74418]: Updating compute-2:/etc/ceph/ceph.conf
Dec 05 09:50:37 compute-0 ceph-mon[74418]: pgmap v14: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 12 op/s
Dec 05 09:50:37 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 05 09:50:37 compute-0 sudo[104071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf.new /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:50:37 compute-0 sudo[104071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:37 compute-0 sudo[104071]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:37 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:50:37 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 sudo[104097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:50:38 compute-0 sudo[104097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104097]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 sudo[104145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph
Dec 05 09:50:38 compute-0 sudo[104145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104145]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:38 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:38 compute-0 sudo[104199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:50:38 compute-0 sudo[104199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104199]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 sudo[104247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:38 compute-0 sudo[104247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104247]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 sudo[104294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:50:38 compute-0 sudo[104294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104294]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 sudo[104350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dziqdwygrwqlhytfdbwuewvrxdvwydts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928238.0450213-333-55670595169439/AnsiballZ_setup.py'
Dec 05 09:50:38 compute-0 sudo[104350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:50:38 compute-0 sudo[104374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:50:38 compute-0 sudo[104374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104374]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 sudo[104399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:50:38 compute-0 sudo[104399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104399]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 sudo[104424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 sudo[104424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104424]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 sudo[104449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:50:38 compute-0 sudo[104449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104449]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 python3.9[104370]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:50:38 compute-0 sudo[104474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config
Dec 05 09:50:38 compute-0 sudo[104474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104474]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 sudo[104506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:50:38 compute-0 sudo[104506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104506]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:38 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:38 compute-0 sudo[104531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:38 compute-0 sudo[104531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104531]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 sudo[104556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:50:38 compute-0 sudo[104556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104556]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 sudo[104350]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 sudo[104605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:50:38 compute-0 sudo[104605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104605]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 sudo[104630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new
Dec 05 09:50:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec 05 09:50:38 compute-0 sudo[104630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104630]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec 05 09:50:38 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec 05 09:50:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=67/67 les/c/f=68/68/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[67,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=67/67 les/c/f=68/68/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[67,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:50:38 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 119 pg[9.19( v 52'1029 (0'0,52'1029] local-lis/les=118/119 n=7 ec=55/40 lis/c=116/85 les/c/f=117/86/0 sis=118) [1] r=0 lpr=118 pi=[85,118)/1 crt=52'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:50:38 compute-0 ceph-mon[74418]: Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:50:38 compute-0 ceph-mon[74418]: Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:50:38 compute-0 ceph-mon[74418]: Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.conf
Dec 05 09:50:38 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 05 09:50:38 compute-0 ceph-mon[74418]: osdmap e118: 3 total, 3 up, 3 in
Dec 05 09:50:38 compute-0 ceph-mon[74418]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 ceph-mon[74418]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 ceph-mon[74418]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 ceph-mon[74418]: osdmap e119: 3 total, 3 up, 3 in
Dec 05 09:50:38 compute-0 sudo[104655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3c63ce0f-5206-59ae-8381-b67d0b6424b5/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring.new /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:50:38 compute-0 sudo[104655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:38 compute-0 sudo[104655]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:39 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:39 compute-0 sudo[104753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkpatojcjxabdhooztaszncbhqkmmivc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928238.0450213-333-55670595169439/AnsiballZ_dnf.py'
Dec 05 09:50:39 compute-0 sudo[104753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:50:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:50:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:39.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:50:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:50:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v17: 353 pgs: 1 unknown, 1 remapped+peering, 1 peering, 350 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.8 KiB/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:50:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:50:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:50:39 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:50:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:50:39 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:39 compute-0 python3.9[104755]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:50:39 compute-0 sudo[104756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:39 compute-0 sudo[104756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:39 compute-0 sudo[104756]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:39 compute-0 sudo[104782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:50:39 compute-0 sudo[104782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:39.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec 05 09:50:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec 05 09:50:39 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec 05 09:50:39 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 120 pg[9.1a( v 52'1029 (0'0,52'1029] local-lis/les=0/0 n=4 ec=55/40 lis/c=118/86 les/c/f=119/87/0 sis=120) [1] r=0 lpr=120 pi=[86,120)/1 luod=0'0 crt=52'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:39 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 120 pg[9.1a( v 52'1029 (0'0,52'1029] local-lis/les=0/0 n=4 ec=55/40 lis/c=118/86 les/c/f=119/87/0 sis=120) [1] r=0 lpr=120 pi=[86,120)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:50:40 compute-0 ceph-mon[74418]: Updating compute-0:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:50:40 compute-0 ceph-mon[74418]: Updating compute-1:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:50:40 compute-0 ceph-mon[74418]: Updating compute-2:/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/config/ceph.client.admin.keyring
Dec 05 09:50:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:40 compute-0 ceph-mon[74418]: pgmap v17: 353 pgs: 1 unknown, 1 remapped+peering, 1 peering, 350 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.8 KiB/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Dec 05 09:50:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:50:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:50:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:40 compute-0 ceph-mon[74418]: osdmap e120: 3 total, 3 up, 3 in
Dec 05 09:50:40 compute-0 podman[104859]: 2025-12-05 09:50:40.061401355 +0000 UTC m=+0.043761987 container create 0141f66d4948466e605ee043bd55822ec882b1addd5b49e9bec0be36e802195f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_cartwright, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:50:40 compute-0 systemd[1]: Started libpod-conmon-0141f66d4948466e605ee043bd55822ec882b1addd5b49e9bec0be36e802195f.scope.
Dec 05 09:50:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:40 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf78004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:40 compute-0 podman[104859]: 2025-12-05 09:50:40.13653398 +0000 UTC m=+0.118894662 container init 0141f66d4948466e605ee043bd55822ec882b1addd5b49e9bec0be36e802195f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 05 09:50:40 compute-0 podman[104859]: 2025-12-05 09:50:40.041537465 +0000 UTC m=+0.023898117 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:50:40 compute-0 podman[104859]: 2025-12-05 09:50:40.143554594 +0000 UTC m=+0.125915226 container start 0141f66d4948466e605ee043bd55822ec882b1addd5b49e9bec0be36e802195f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:50:40 compute-0 epic_cartwright[104878]: 167 167
Dec 05 09:50:40 compute-0 systemd[1]: libpod-0141f66d4948466e605ee043bd55822ec882b1addd5b49e9bec0be36e802195f.scope: Deactivated successfully.
Dec 05 09:50:40 compute-0 podman[104859]: 2025-12-05 09:50:40.150123816 +0000 UTC m=+0.132484458 container attach 0141f66d4948466e605ee043bd55822ec882b1addd5b49e9bec0be36e802195f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_cartwright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 09:50:40 compute-0 podman[104859]: 2025-12-05 09:50:40.150771664 +0000 UTC m=+0.133132316 container died 0141f66d4948466e605ee043bd55822ec882b1addd5b49e9bec0be36e802195f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_cartwright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Dec 05 09:50:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c5802df6c29bdbdb2b9cb8e4118bb4db29c65c46f479dedc633f3d73fa2ad04-merged.mount: Deactivated successfully.
Dec 05 09:50:40 compute-0 podman[104859]: 2025-12-05 09:50:40.194509187 +0000 UTC m=+0.176869819 container remove 0141f66d4948466e605ee043bd55822ec882b1addd5b49e9bec0be36e802195f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_cartwright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:50:40 compute-0 systemd[1]: libpod-conmon-0141f66d4948466e605ee043bd55822ec882b1addd5b49e9bec0be36e802195f.scope: Deactivated successfully.
Dec 05 09:50:40 compute-0 podman[104909]: 2025-12-05 09:50:40.385756311 +0000 UTC m=+0.052399671 container create 45c8d7be10fe39a0824b51517bf2ee7a3e551e0510a3fbe1976b0b51e8925e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_swirles, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:50:40 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec 05 09:50:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:40 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:50:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:40 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:50:40 compute-0 systemd[1]: Started libpod-conmon-45c8d7be10fe39a0824b51517bf2ee7a3e551e0510a3fbe1976b0b51e8925e3a.scope.
Dec 05 09:50:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:40 compute-0 podman[104909]: 2025-12-05 09:50:40.369168048 +0000 UTC m=+0.035811428 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dca2bc3d646d4cf89f3e7ac35af4e6aacedf6b6d090840f5680cf85261dccb67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dca2bc3d646d4cf89f3e7ac35af4e6aacedf6b6d090840f5680cf85261dccb67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dca2bc3d646d4cf89f3e7ac35af4e6aacedf6b6d090840f5680cf85261dccb67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dca2bc3d646d4cf89f3e7ac35af4e6aacedf6b6d090840f5680cf85261dccb67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dca2bc3d646d4cf89f3e7ac35af4e6aacedf6b6d090840f5680cf85261dccb67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:40 compute-0 podman[104909]: 2025-12-05 09:50:40.488937532 +0000 UTC m=+0.155580902 container init 45c8d7be10fe39a0824b51517bf2ee7a3e551e0510a3fbe1976b0b51e8925e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 09:50:40 compute-0 podman[104909]: 2025-12-05 09:50:40.495028791 +0000 UTC m=+0.161672171 container start 45c8d7be10fe39a0824b51517bf2ee7a3e551e0510a3fbe1976b0b51e8925e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 09:50:40 compute-0 podman[104909]: 2025-12-05 09:50:40.49803929 +0000 UTC m=+0.164682670 container attach 45c8d7be10fe39a0824b51517bf2ee7a3e551e0510a3fbe1976b0b51e8925e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:50:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:40 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:40 compute-0 romantic_swirles[104925]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:50:40 compute-0 romantic_swirles[104925]: --> All data devices are unavailable
Dec 05 09:50:40 compute-0 systemd[1]: libpod-45c8d7be10fe39a0824b51517bf2ee7a3e551e0510a3fbe1976b0b51e8925e3a.scope: Deactivated successfully.
Dec 05 09:50:40 compute-0 podman[104909]: 2025-12-05 09:50:40.844218628 +0000 UTC m=+0.510861988 container died 45c8d7be10fe39a0824b51517bf2ee7a3e551e0510a3fbe1976b0b51e8925e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 09:50:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-dca2bc3d646d4cf89f3e7ac35af4e6aacedf6b6d090840f5680cf85261dccb67-merged.mount: Deactivated successfully.
Dec 05 09:50:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec 05 09:50:40 compute-0 podman[104909]: 2025-12-05 09:50:40.967772911 +0000 UTC m=+0.634416271 container remove 45c8d7be10fe39a0824b51517bf2ee7a3e551e0510a3fbe1976b0b51e8925e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_swirles, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:50:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec 05 09:50:40 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec 05 09:50:40 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 121 pg[9.1b( v 52'1029 (0'0,52'1029] local-lis/les=0/0 n=2 ec=55/40 lis/c=119/67 les/c/f=120/68/0 sis=121) [1] r=0 lpr=121 pi=[67,121)/1 luod=0'0 crt=52'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:40 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 121 pg[9.1b( v 52'1029 (0'0,52'1029] local-lis/les=0/0 n=2 ec=55/40 lis/c=119/67 les/c/f=120/68/0 sis=121) [1] r=0 lpr=121 pi=[67,121)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:50:40 compute-0 systemd[1]: libpod-conmon-45c8d7be10fe39a0824b51517bf2ee7a3e551e0510a3fbe1976b0b51e8925e3a.scope: Deactivated successfully.
Dec 05 09:50:40 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 121 pg[9.1a( v 52'1029 (0'0,52'1029] local-lis/les=120/121 n=4 ec=55/40 lis/c=118/86 les/c/f=119/87/0 sis=120) [1] r=0 lpr=120 pi=[86,120)/1 crt=52'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:50:41 compute-0 sudo[104782]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:41 compute-0 sudo[104970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:41 compute-0 sudo[104970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:41 compute-0 sudo[104970]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:41 compute-0 sudo[104998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:50:41 compute-0 sudo[104998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:41 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec 05 09:50:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:41.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 05 09:50:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v20: 353 pgs: 1 unknown, 1 remapped+peering, 1 peering, 350 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.8 KiB/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Dec 05 09:50:41 compute-0 podman[105061]: 2025-12-05 09:50:41.628744167 +0000 UTC m=+0.022030358 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:50:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:41.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:42 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:50:42 compute-0 podman[105061]: 2025-12-05 09:50:42.120749422 +0000 UTC m=+0.514035593 container create ef27d5be6bf441b4ddbb808a573096d1ebfdb9dc5cebbcf4cc2653fe38022af7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_chaplygin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:50:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec 05 09:50:42 compute-0 ceph-mon[74418]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec 05 09:50:42 compute-0 ceph-mon[74418]: osdmap e121: 3 total, 3 up, 3 in
Dec 05 09:50:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec 05 09:50:42 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec 05 09:50:42 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 122 pg[9.1b( v 52'1029 (0'0,52'1029] local-lis/les=121/122 n=2 ec=55/40 lis/c=119/67 les/c/f=120/68/0 sis=121) [1] r=0 lpr=121 pi=[67,121)/1 crt=52'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:50:42 compute-0 systemd[1]: Started libpod-conmon-ef27d5be6bf441b4ddbb808a573096d1ebfdb9dc5cebbcf4cc2653fe38022af7.scope.
Dec 05 09:50:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:42 compute-0 podman[105061]: 2025-12-05 09:50:42.217053421 +0000 UTC m=+0.610339622 container init ef27d5be6bf441b4ddbb808a573096d1ebfdb9dc5cebbcf4cc2653fe38022af7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Dec 05 09:50:42 compute-0 podman[105061]: 2025-12-05 09:50:42.224188508 +0000 UTC m=+0.617474679 container start ef27d5be6bf441b4ddbb808a573096d1ebfdb9dc5cebbcf4cc2653fe38022af7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:50:42 compute-0 podman[105061]: 2025-12-05 09:50:42.228449479 +0000 UTC m=+0.621735710 container attach ef27d5be6bf441b4ddbb808a573096d1ebfdb9dc5cebbcf4cc2653fe38022af7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_chaplygin, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:50:42 compute-0 clever_chaplygin[105079]: 167 167
Dec 05 09:50:42 compute-0 systemd[1]: libpod-ef27d5be6bf441b4ddbb808a573096d1ebfdb9dc5cebbcf4cc2653fe38022af7.scope: Deactivated successfully.
Dec 05 09:50:42 compute-0 podman[105061]: 2025-12-05 09:50:42.22959667 +0000 UTC m=+0.622882841 container died ef27d5be6bf441b4ddbb808a573096d1ebfdb9dc5cebbcf4cc2653fe38022af7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 09:50:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4289c48f0161bfad1a0aca094aa97809792d96b98fefc20c94dc15cee77e479a-merged.mount: Deactivated successfully.
Dec 05 09:50:42 compute-0 podman[105061]: 2025-12-05 09:50:42.271301121 +0000 UTC m=+0.664587292 container remove ef27d5be6bf441b4ddbb808a573096d1ebfdb9dc5cebbcf4cc2653fe38022af7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:50:42 compute-0 systemd[1]: libpod-conmon-ef27d5be6bf441b4ddbb808a573096d1ebfdb9dc5cebbcf4cc2653fe38022af7.scope: Deactivated successfully.
Dec 05 09:50:42 compute-0 podman[105108]: 2025-12-05 09:50:42.421544082 +0000 UTC m=+0.045407279 container create f5d0d05d1855c619650f3d3f54e9ed1ae5f7bc5b0ca8e9ed550c29cd740be9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:50:42 compute-0 systemd[1]: Started libpod-conmon-f5d0d05d1855c619650f3d3f54e9ed1ae5f7bc5b0ca8e9ed550c29cd740be9f4.scope.
Dec 05 09:50:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/207b8cd07ecdf306b7c2abac73db1889356b4c386337a7871e3918e062a33d60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/207b8cd07ecdf306b7c2abac73db1889356b4c386337a7871e3918e062a33d60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/207b8cd07ecdf306b7c2abac73db1889356b4c386337a7871e3918e062a33d60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/207b8cd07ecdf306b7c2abac73db1889356b4c386337a7871e3918e062a33d60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:42 compute-0 podman[105108]: 2025-12-05 09:50:42.403764277 +0000 UTC m=+0.027627494 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:50:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec 05 09:50:42 compute-0 podman[105108]: 2025-12-05 09:50:42.502943072 +0000 UTC m=+0.126806289 container init f5d0d05d1855c619650f3d3f54e9ed1ae5f7bc5b0ca8e9ed550c29cd740be9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 05 09:50:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:50:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:50:42 compute-0 podman[105108]: 2025-12-05 09:50:42.515702366 +0000 UTC m=+0.139565603 container start f5d0d05d1855c619650f3d3f54e9ed1ae5f7bc5b0ca8e9ed550c29cd740be9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 09:50:42 compute-0 podman[105108]: 2025-12-05 09:50:42.519604618 +0000 UTC m=+0.143467855 container attach f5d0d05d1855c619650f3d3f54e9ed1ae5f7bc5b0ca8e9ed550c29cd740be9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_edison, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:50:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:42 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:42 compute-0 epic_edison[105128]: {
Dec 05 09:50:42 compute-0 epic_edison[105128]:     "1": [
Dec 05 09:50:42 compute-0 epic_edison[105128]:         {
Dec 05 09:50:42 compute-0 epic_edison[105128]:             "devices": [
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "/dev/loop3"
Dec 05 09:50:42 compute-0 epic_edison[105128]:             ],
Dec 05 09:50:42 compute-0 epic_edison[105128]:             "lv_name": "ceph_lv0",
Dec 05 09:50:42 compute-0 epic_edison[105128]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:50:42 compute-0 epic_edison[105128]:             "lv_size": "21470642176",
Dec 05 09:50:42 compute-0 epic_edison[105128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:50:42 compute-0 epic_edison[105128]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:50:42 compute-0 epic_edison[105128]:             "name": "ceph_lv0",
Dec 05 09:50:42 compute-0 epic_edison[105128]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:50:42 compute-0 epic_edison[105128]:             "tags": {
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.cluster_name": "ceph",
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.crush_device_class": "",
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.encrypted": "0",
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.osd_id": "1",
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.type": "block",
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.vdo": "0",
Dec 05 09:50:42 compute-0 epic_edison[105128]:                 "ceph.with_tpm": "0"
Dec 05 09:50:42 compute-0 epic_edison[105128]:             },
Dec 05 09:50:42 compute-0 epic_edison[105128]:             "type": "block",
Dec 05 09:50:42 compute-0 epic_edison[105128]:             "vg_name": "ceph_vg0"
Dec 05 09:50:42 compute-0 epic_edison[105128]:         }
Dec 05 09:50:42 compute-0 epic_edison[105128]:     ]
Dec 05 09:50:42 compute-0 epic_edison[105128]: }
Dec 05 09:50:42 compute-0 systemd[1]: libpod-f5d0d05d1855c619650f3d3f54e9ed1ae5f7bc5b0ca8e9ed550c29cd740be9f4.scope: Deactivated successfully.
Dec 05 09:50:42 compute-0 conmon[105128]: conmon f5d0d05d1855c619650f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5d0d05d1855c619650f3d3f54e9ed1ae5f7bc5b0ca8e9ed550c29cd740be9f4.scope/container/memory.events
Dec 05 09:50:42 compute-0 podman[105108]: 2025-12-05 09:50:42.828766868 +0000 UTC m=+0.452630065 container died f5d0d05d1855c619650f3d3f54e9ed1ae5f7bc5b0ca8e9ed550c29cd740be9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 05 09:50:43 compute-0 ceph-mon[74418]: pgmap v20: 353 pgs: 1 unknown, 1 remapped+peering, 1 peering, 350 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.8 KiB/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Dec 05 09:50:43 compute-0 ceph-mon[74418]: osdmap e122: 3 total, 3 up, 3 in
Dec 05 09:50:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-207b8cd07ecdf306b7c2abac73db1889356b4c386337a7871e3918e062a33d60-merged.mount: Deactivated successfully.
Dec 05 09:50:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:43 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:43 compute-0 podman[105108]: 2025-12-05 09:50:43.161388061 +0000 UTC m=+0.785251258 container remove f5d0d05d1855c619650f3d3f54e9ed1ae5f7bc5b0ca8e9ed550c29cd740be9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 09:50:43 compute-0 sudo[104998]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:43 compute-0 systemd[1]: libpod-conmon-f5d0d05d1855c619650f3d3f54e9ed1ae5f7bc5b0ca8e9ed550c29cd740be9f4.scope: Deactivated successfully.
Dec 05 09:50:43 compute-0 sudo[105162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:43 compute-0 sudo[105162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:43 compute-0 sudo[105162]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:43.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:43 compute-0 sudo[105187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:50:43 compute-0 sudo[105187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v22: 353 pgs: 1 unknown, 1 remapped+peering, 1 peering, 350 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:43 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 09:50:43 compute-0 podman[105253]: 2025-12-05 09:50:43.705098998 +0000 UTC m=+0.030957481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:50:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:43.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:43 compute-0 podman[105253]: 2025-12-05 09:50:43.879341928 +0000 UTC m=+0.205200431 container create 48444998c691d2133f2f212a5277d45d0186862d11a39bcac8b245227f81faae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mccarthy, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:50:43 compute-0 systemd[1]: Started libpod-conmon-48444998c691d2133f2f212a5277d45d0186862d11a39bcac8b245227f81faae.scope.
Dec 05 09:50:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:44 compute-0 podman[105253]: 2025-12-05 09:50:44.002950443 +0000 UTC m=+0.328808926 container init 48444998c691d2133f2f212a5277d45d0186862d11a39bcac8b245227f81faae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mccarthy, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:50:44 compute-0 podman[105253]: 2025-12-05 09:50:44.013783046 +0000 UTC m=+0.339641509 container start 48444998c691d2133f2f212a5277d45d0186862d11a39bcac8b245227f81faae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mccarthy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 09:50:44 compute-0 podman[105253]: 2025-12-05 09:50:44.017898404 +0000 UTC m=+0.343756867 container attach 48444998c691d2133f2f212a5277d45d0186862d11a39bcac8b245227f81faae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 09:50:44 compute-0 tender_mccarthy[105273]: 167 167
Dec 05 09:50:44 compute-0 systemd[1]: libpod-48444998c691d2133f2f212a5277d45d0186862d11a39bcac8b245227f81faae.scope: Deactivated successfully.
Dec 05 09:50:44 compute-0 podman[105253]: 2025-12-05 09:50:44.023263964 +0000 UTC m=+0.349122447 container died 48444998c691d2133f2f212a5277d45d0186862d11a39bcac8b245227f81faae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 09:50:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a142b6d75767c85241da22374dc9e3234f9cfa46db2f1c95ff2d0cd5bf82057-merged.mount: Deactivated successfully.
Dec 05 09:50:44 compute-0 podman[105253]: 2025-12-05 09:50:44.114521651 +0000 UTC m=+0.440380124 container remove 48444998c691d2133f2f212a5277d45d0186862d11a39bcac8b245227f81faae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mccarthy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:50:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:44 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:44 compute-0 systemd[1]: libpod-conmon-48444998c691d2133f2f212a5277d45d0186862d11a39bcac8b245227f81faae.scope: Deactivated successfully.
Dec 05 09:50:44 compute-0 ceph-mon[74418]: pgmap v22: 353 pgs: 1 unknown, 1 remapped+peering, 1 peering, 350 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:44 compute-0 podman[105299]: 2025-12-05 09:50:44.282887378 +0000 UTC m=+0.043730776 container create 17121aee69190f91d21478582a2c85b8bd952146c92a5f7fbf0b34edf72b27b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_volhard, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:50:44 compute-0 systemd[1]: Started libpod-conmon-17121aee69190f91d21478582a2c85b8bd952146c92a5f7fbf0b34edf72b27b5.scope.
Dec 05 09:50:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a727f6676655cbee3db8c0c21c247d5cb1f88777352ab17fcea05ef072ac1723/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a727f6676655cbee3db8c0c21c247d5cb1f88777352ab17fcea05ef072ac1723/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a727f6676655cbee3db8c0c21c247d5cb1f88777352ab17fcea05ef072ac1723/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a727f6676655cbee3db8c0c21c247d5cb1f88777352ab17fcea05ef072ac1723/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:44 compute-0 podman[105299]: 2025-12-05 09:50:44.262718349 +0000 UTC m=+0.023561777 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:50:44 compute-0 podman[105299]: 2025-12-05 09:50:44.365616452 +0000 UTC m=+0.126459870 container init 17121aee69190f91d21478582a2c85b8bd952146c92a5f7fbf0b34edf72b27b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 09:50:44 compute-0 podman[105299]: 2025-12-05 09:50:44.374050293 +0000 UTC m=+0.134893691 container start 17121aee69190f91d21478582a2c85b8bd952146c92a5f7fbf0b34edf72b27b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_volhard, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:50:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:44 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf88003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:44 compute-0 podman[105299]: 2025-12-05 09:50:44.713863054 +0000 UTC m=+0.474706482 container attach 17121aee69190f91d21478582a2c85b8bd952146c92a5f7fbf0b34edf72b27b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_volhard, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:50:45 compute-0 lvm[105391]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:50:45 compute-0 lvm[105391]: VG ceph_vg0 finished
Dec 05 09:50:45 compute-0 beautiful_volhard[105317]: {}
Dec 05 09:50:45 compute-0 systemd[1]: libpod-17121aee69190f91d21478582a2c85b8bd952146c92a5f7fbf0b34edf72b27b5.scope: Deactivated successfully.
Dec 05 09:50:45 compute-0 systemd[1]: libpod-17121aee69190f91d21478582a2c85b8bd952146c92a5f7fbf0b34edf72b27b5.scope: Consumed 1.080s CPU time.
Dec 05 09:50:45 compute-0 podman[105394]: 2025-12-05 09:50:45.156455155 +0000 UTC m=+0.030075337 container died 17121aee69190f91d21478582a2c85b8bd952146c92a5f7fbf0b34edf72b27b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_volhard, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 05 09:50:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:45 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a727f6676655cbee3db8c0c21c247d5cb1f88777352ab17fcea05ef072ac1723-merged.mount: Deactivated successfully.
Dec 05 09:50:45 compute-0 podman[105394]: 2025-12-05 09:50:45.202199383 +0000 UTC m=+0.075819485 container remove 17121aee69190f91d21478582a2c85b8bd952146c92a5f7fbf0b34edf72b27b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_volhard, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:50:45 compute-0 systemd[1]: libpod-conmon-17121aee69190f91d21478582a2c85b8bd952146c92a5f7fbf0b34edf72b27b5.scope: Deactivated successfully.
Dec 05 09:50:45 compute-0 sudo[105187]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:45.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 05 09:50:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:45 compute-0 sudo[105409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:50:45 compute-0 sudo[105409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:45 compute-0 sudo[105410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:50:45 compute-0 sudo[105410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:45 compute-0 sudo[105409]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:45 compute-0 sudo[105410]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v23: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 853 B/s wr, 2 op/s; 36 B/s, 0 objects/s recovering
Dec 05 09:50:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec 05 09:50:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 05 09:50:45 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Dec 05 09:50:45 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Dec 05 09:50:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:50:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 09:50:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:50:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 09:50:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:50:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:45 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 05 09:50:45 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 05 09:50:45 compute-0 sudo[105459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:45 compute-0 sudo[105459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:45 compute-0 sudo[105459]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:50:45] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 09:50:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:50:45] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 09:50:45 compute-0 sudo[105484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:45 compute-0 sudo[105484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:45.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:46 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf7c003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:46 compute-0 podman[105529]: 2025-12-05 09:50:46.0944728 +0000 UTC m=+0.044676170 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:50:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec 05 09:50:46 compute-0 podman[105529]: 2025-12-05 09:50:46.473600421 +0000 UTC m=+0.423803751 container create fd8aa14775534ad730a2b723dd839c5f524d17eecc95203d55a07ba26592139a (image=quay.io/ceph/ceph:v19, name=heuristic_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 09:50:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:46 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:46 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:46 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:46 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:46 compute-0 ceph-mon[74418]: pgmap v23: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 853 B/s wr, 2 op/s; 36 B/s, 0 objects/s recovering
Dec 05 09:50:46 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 05 09:50:46 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 09:50:46 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 09:50:46 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:46 compute-0 systemd[1]: Started libpod-conmon-fd8aa14775534ad730a2b723dd839c5f524d17eecc95203d55a07ba26592139a.scope.
Dec 05 09:50:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:46 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 05 09:50:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec 05 09:50:47 compute-0 podman[105529]: 2025-12-05 09:50:47.049643244 +0000 UTC m=+0.999846624 container init fd8aa14775534ad730a2b723dd839c5f524d17eecc95203d55a07ba26592139a (image=quay.io/ceph/ceph:v19, name=heuristic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 09:50:47 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec 05 09:50:47 compute-0 podman[105529]: 2025-12-05 09:50:47.059528633 +0000 UTC m=+1.009731973 container start fd8aa14775534ad730a2b723dd839c5f524d17eecc95203d55a07ba26592139a (image=quay.io/ceph/ceph:v19, name=heuristic_mclean, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 09:50:47 compute-0 heuristic_mclean[105551]: 167 167
Dec 05 09:50:47 compute-0 podman[105529]: 2025-12-05 09:50:47.064012 +0000 UTC m=+1.014215330 container attach fd8aa14775534ad730a2b723dd839c5f524d17eecc95203d55a07ba26592139a (image=quay.io/ceph/ceph:v19, name=heuristic_mclean, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 09:50:47 compute-0 systemd[1]: libpod-fd8aa14775534ad730a2b723dd839c5f524d17eecc95203d55a07ba26592139a.scope: Deactivated successfully.
Dec 05 09:50:47 compute-0 podman[105529]: 2025-12-05 09:50:47.065639123 +0000 UTC m=+1.015842453 container died fd8aa14775534ad730a2b723dd839c5f524d17eecc95203d55a07ba26592139a (image=quay.io/ceph/ceph:v19, name=heuristic_mclean, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 09:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d1fe815f1dc47238b0e823a44b0962f3bf013921fde01a4eddebb5255b3e06a-merged.mount: Deactivated successfully.
Dec 05 09:50:47 compute-0 podman[105529]: 2025-12-05 09:50:47.115352264 +0000 UTC m=+1.065555594 container remove fd8aa14775534ad730a2b723dd839c5f524d17eecc95203d55a07ba26592139a (image=quay.io/ceph/ceph:v19, name=heuristic_mclean, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 09:50:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:50:47 compute-0 systemd[1]: libpod-conmon-fd8aa14775534ad730a2b723dd839c5f524d17eecc95203d55a07ba26592139a.scope: Deactivated successfully.
Dec 05 09:50:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:47 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c0013d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:47.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v25: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 796 B/s wr, 2 op/s; 34 B/s, 0 objects/s recovering
Dec 05 09:50:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec 05 09:50:47 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 05 09:50:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec 05 09:50:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:47.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 05 09:50:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70001070 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec 05 09:50:48 compute-0 ceph-mon[74418]: Reconfiguring mon.compute-0 (monmap changed)...
Dec 05 09:50:48 compute-0 ceph-mon[74418]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 05 09:50:48 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 05 09:50:48 compute-0 ceph-mon[74418]: osdmap e123: 3 total, 3 up, 3 in
Dec 05 09:50:48 compute-0 ceph-mon[74418]: pgmap v25: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 796 B/s wr, 2 op/s; 34 B/s, 0 objects/s recovering
Dec 05 09:50:48 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 05 09:50:48 compute-0 sudo[105484]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 05 09:50:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec 05 09:50:48 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec 05 09:50:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:48 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.hvnxai (monmap changed)...
Dec 05 09:50:48 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.hvnxai (monmap changed)...
Dec 05 09:50:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.hvnxai", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:50:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hvnxai", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 05 09:50:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:50:48 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 09:50:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:50:48 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:48 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.hvnxai on compute-0
Dec 05 09:50:48 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.hvnxai on compute-0
Dec 05 09:50:48 compute-0 sudo[105569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:48 compute-0 sudo[105569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:48 compute-0 sudo[105569]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:48 compute-0 sudo[105594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:48 compute-0 sudo[105594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:48 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:48 compute-0 podman[105635]: 2025-12-05 09:50:48.761334973 +0000 UTC m=+0.028633950 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 05 09:50:49 compute-0 podman[105635]: 2025-12-05 09:50:49.115729666 +0000 UTC m=+0.383028623 container create a9ed650c43da627b1a5549c8bf51c774d1d9488fe90bafa4b8ccbaefe16f506a (image=quay.io/ceph/ceph:v19, name=festive_moser, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 05 09:50:49 compute-0 systemd[1]: Started libpod-conmon-a9ed650c43da627b1a5549c8bf51c774d1d9488fe90bafa4b8ccbaefe16f506a.scope.
Dec 05 09:50:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095049 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 09:50:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:49 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c0013d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:49 compute-0 podman[105635]: 2025-12-05 09:50:49.207266941 +0000 UTC m=+0.474565938 container init a9ed650c43da627b1a5549c8bf51c774d1d9488fe90bafa4b8ccbaefe16f506a (image=quay.io/ceph/ceph:v19, name=festive_moser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:50:49 compute-0 podman[105635]: 2025-12-05 09:50:49.217446858 +0000 UTC m=+0.484745825 container start a9ed650c43da627b1a5549c8bf51c774d1d9488fe90bafa4b8ccbaefe16f506a (image=quay.io/ceph/ceph:v19, name=festive_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:50:49 compute-0 festive_moser[105651]: 167 167
Dec 05 09:50:49 compute-0 systemd[1]: libpod-a9ed650c43da627b1a5549c8bf51c774d1d9488fe90bafa4b8ccbaefe16f506a.scope: Deactivated successfully.
Dec 05 09:50:49 compute-0 podman[105635]: 2025-12-05 09:50:49.223051814 +0000 UTC m=+0.490350811 container attach a9ed650c43da627b1a5549c8bf51c774d1d9488fe90bafa4b8ccbaefe16f506a (image=quay.io/ceph/ceph:v19, name=festive_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:50:49 compute-0 podman[105635]: 2025-12-05 09:50:49.224203394 +0000 UTC m=+0.491502341 container died a9ed650c43da627b1a5549c8bf51c774d1d9488fe90bafa4b8ccbaefe16f506a (image=quay.io/ceph/ceph:v19, name=festive_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 09:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6065a74bca77c595973612cba3cf55928541d6e4a4c5773db90e120a54093a81-merged.mount: Deactivated successfully.
Dec 05 09:50:49 compute-0 podman[105635]: 2025-12-05 09:50:49.263487373 +0000 UTC m=+0.530786330 container remove a9ed650c43da627b1a5549c8bf51c774d1d9488fe90bafa4b8ccbaefe16f506a (image=quay.io/ceph/ceph:v19, name=festive_moser, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:50:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:49.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 05 09:50:49 compute-0 ceph-mon[74418]: osdmap e124: 3 total, 3 up, 3 in
Dec 05 09:50:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:49 compute-0 ceph-mon[74418]: Reconfiguring mgr.compute-0.hvnxai (monmap changed)...
Dec 05 09:50:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hvnxai", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 05 09:50:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 09:50:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:49 compute-0 ceph-mon[74418]: Reconfiguring daemon mgr.compute-0.hvnxai on compute-0
Dec 05 09:50:49 compute-0 systemd[1]: libpod-conmon-a9ed650c43da627b1a5549c8bf51c774d1d9488fe90bafa4b8ccbaefe16f506a.scope: Deactivated successfully.
Dec 05 09:50:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec 05 09:50:49 compute-0 sudo[105594]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec 05 09:50:49 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec 05 09:50:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:49 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Dec 05 09:50:49 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Dec 05 09:50:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:50:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 05 09:50:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:50:49 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:49 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Dec 05 09:50:49 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Dec 05 09:50:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v28: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 852 B/s wr, 3 op/s; 36 B/s, 0 objects/s recovering
Dec 05 09:50:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec 05 09:50:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 05 09:50:49 compute-0 sudo[105668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:49 compute-0 sudo[105668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:49 compute-0 sudo[105668]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:49 compute-0 sudo[105693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:49 compute-0 sudo[105693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec 05 09:50:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:49.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 05 09:50:49 compute-0 podman[105732]: 2025-12-05 09:50:49.795311048 +0000 UTC m=+0.026767260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:50:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:50 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70001070 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:50 compute-0 podman[105732]: 2025-12-05 09:50:50.332181497 +0000 UTC m=+0.563637699 container create b7731b2d0180353e904383c3947baeadff62faba0388eba39cd60ab50e6fb062 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_williamson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 05 09:50:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:50 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec 05 09:50:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:51 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:51 compute-0 systemd[1]: Started libpod-conmon-b7731b2d0180353e904383c3947baeadff62faba0388eba39cd60ab50e6fb062.scope.
Dec 05 09:50:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec 05 09:50:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:51.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 05 09:50:51 compute-0 ceph-mon[74418]: osdmap e125: 3 total, 3 up, 3 in
Dec 05 09:50:51 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:51 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:51 compute-0 ceph-mon[74418]: Reconfiguring crash.compute-0 (monmap changed)...
Dec 05 09:50:51 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 05 09:50:51 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:51 compute-0 ceph-mon[74418]: Reconfiguring daemon crash.compute-0 on compute-0
Dec 05 09:50:51 compute-0 ceph-mon[74418]: pgmap v28: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 852 B/s wr, 3 op/s; 36 B/s, 0 objects/s recovering
Dec 05 09:50:51 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 05 09:50:51 compute-0 podman[105732]: 2025-12-05 09:50:51.352359431 +0000 UTC m=+1.583815643 container init b7731b2d0180353e904383c3947baeadff62faba0388eba39cd60ab50e6fb062 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:50:51 compute-0 podman[105732]: 2025-12-05 09:50:51.360138595 +0000 UTC m=+1.591594787 container start b7731b2d0180353e904383c3947baeadff62faba0388eba39cd60ab50e6fb062 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:50:51 compute-0 jovial_williamson[105750]: 167 167
Dec 05 09:50:51 compute-0 systemd[1]: libpod-b7731b2d0180353e904383c3947baeadff62faba0388eba39cd60ab50e6fb062.scope: Deactivated successfully.
Dec 05 09:50:51 compute-0 podman[105732]: 2025-12-05 09:50:51.388655991 +0000 UTC m=+1.620112183 container attach b7731b2d0180353e904383c3947baeadff62faba0388eba39cd60ab50e6fb062 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 05 09:50:51 compute-0 podman[105732]: 2025-12-05 09:50:51.390972792 +0000 UTC m=+1.622428984 container died b7731b2d0180353e904383c3947baeadff62faba0388eba39cd60ab50e6fb062 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_williamson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 05 09:50:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 05 09:50:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec 05 09:50:51 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec 05 09:50:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 235 B/s rd, 0 B/s wr, 0 op/s
Dec 05 09:50:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 05 09:50:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:50:51 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 126 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=74/74 les/c/f=75/75/0 sis=126) [1] r=0 lpr=126 pi=[74,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:50:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1d80dadae68bf18aa233c65e3b0c50663245e55650fb04759b47250fde07de4-merged.mount: Deactivated successfully.
Dec 05 09:50:51 compute-0 podman[105732]: 2025-12-05 09:50:51.438221418 +0000 UTC m=+1.669677610 container remove b7731b2d0180353e904383c3947baeadff62faba0388eba39cd60ab50e6fb062 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_williamson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 09:50:51 compute-0 systemd[1]: libpod-conmon-b7731b2d0180353e904383c3947baeadff62faba0388eba39cd60ab50e6fb062.scope: Deactivated successfully.
Dec 05 09:50:51 compute-0 sudo[105693]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:51 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 05 09:50:51 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 05 09:50:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 05 09:50:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 05 09:50:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:50:51 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:51 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Dec 05 09:50:51 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Dec 05 09:50:51 compute-0 sudo[105768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:51 compute-0 sudo[105768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:51 compute-0 sudo[105768]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:51 compute-0 sudo[105793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:51 compute-0 sudo[105793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:51.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:52 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c0013d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:50:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec 05 09:50:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:50:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec 05 09:50:52 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec 05 09:50:52 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 127 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=94/94 les/c/f=95/95/0 sis=127) [1] r=0 lpr=127 pi=[94,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:50:52 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 127 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=74/74 les/c/f=75/75/0 sis=127) [1]/[0] r=-1 lpr=127 pi=[74,127)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:52 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 127 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=74/74 les/c/f=75/75/0 sis=127) [1]/[0] r=-1 lpr=127 pi=[74,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:50:52 compute-0 podman[105835]: 2025-12-05 09:50:52.257054184 +0000 UTC m=+0.127729982 container create fb4eb0886dc5ca88a41c9e35971c25ec8b0c27af00485cb3c34d3a6beb5b0fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 09:50:52 compute-0 systemd[1]: Started libpod-conmon-fb4eb0886dc5ca88a41c9e35971c25ec8b0c27af00485cb3c34d3a6beb5b0fb9.scope.
Dec 05 09:50:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:52 compute-0 podman[105835]: 2025-12-05 09:50:52.235172861 +0000 UTC m=+0.105848709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:50:52 compute-0 podman[105835]: 2025-12-05 09:50:52.330469515 +0000 UTC m=+0.201145363 container init fb4eb0886dc5ca88a41c9e35971c25ec8b0c27af00485cb3c34d3a6beb5b0fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:50:52 compute-0 podman[105835]: 2025-12-05 09:50:52.340517328 +0000 UTC m=+0.211193126 container start fb4eb0886dc5ca88a41c9e35971c25ec8b0c27af00485cb3c34d3a6beb5b0fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 09:50:52 compute-0 podman[105835]: 2025-12-05 09:50:52.343614339 +0000 UTC m=+0.214290187 container attach fb4eb0886dc5ca88a41c9e35971c25ec8b0c27af00485cb3c34d3a6beb5b0fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:50:52 compute-0 trusting_elion[105853]: 167 167
Dec 05 09:50:52 compute-0 systemd[1]: libpod-fb4eb0886dc5ca88a41c9e35971c25ec8b0c27af00485cb3c34d3a6beb5b0fb9.scope: Deactivated successfully.
Dec 05 09:50:52 compute-0 podman[105835]: 2025-12-05 09:50:52.344657966 +0000 UTC m=+0.215333764 container died fb4eb0886dc5ca88a41c9e35971c25ec8b0c27af00485cb3c34d3a6beb5b0fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 09:50:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-436f7c6bceb098cf3e16da7ef6a28c89c7aaa73c6355a0ce7ccc98d75252f49d-merged.mount: Deactivated successfully.
Dec 05 09:50:52 compute-0 podman[105835]: 2025-12-05 09:50:52.428989993 +0000 UTC m=+0.299665831 container remove fb4eb0886dc5ca88a41c9e35971c25ec8b0c27af00485cb3c34d3a6beb5b0fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 09:50:52 compute-0 systemd[1]: libpod-conmon-fb4eb0886dc5ca88a41c9e35971c25ec8b0c27af00485cb3c34d3a6beb5b0fb9.scope: Deactivated successfully.
Dec 05 09:50:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 05 09:50:52 compute-0 ceph-mon[74418]: osdmap e126: 3 total, 3 up, 3 in
Dec 05 09:50:52 compute-0 ceph-mon[74418]: pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 235 B/s rd, 0 B/s wr, 0 op/s
Dec 05 09:50:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 09:50:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 05 09:50:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 09:50:52 compute-0 ceph-mon[74418]: osdmap e127: 3 total, 3 up, 3 in
Dec 05 09:50:52 compute-0 sudo[105793]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:52 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec 05 09:50:52 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec 05 09:50:52 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec 05 09:50:52 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec 05 09:50:52 compute-0 sudo[105878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:52 compute-0 sudo[105878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:52 compute-0 sudo[105878]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:52 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70001f70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:52 compute-0 sudo[105903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:52 compute-0 sudo[105903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec 05 09:50:53 compute-0 systemd[1]: Stopping Ceph node-exporter.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:50:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec 05 09:50:53 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec 05 09:50:53 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 128 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=94/94 les/c/f=95/95/0 sis=128) [1]/[0] r=-1 lpr=128 pi=[94,128)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:53 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 128 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=94/94 les/c/f=95/95/0 sis=128) [1]/[0] r=-1 lpr=128 pi=[94,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 09:50:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:53 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:53.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:53 compute-0 podman[105974]: 2025-12-05 09:50:53.413728361 +0000 UTC m=+0.057225998 container died dc2521f476ac6cd8b02d9a95c2d20034aa296ae30c8ddb7ef7e3087931bef2ec (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-29a9403cd47d44ce6c08dcf5c8aed76515d84e65b9c33beda1723d39d02c97d0-merged.mount: Deactivated successfully.
Dec 05 09:50:53 compute-0 ceph-mon[74418]: Reconfiguring osd.1 (monmap changed)...
Dec 05 09:50:53 compute-0 ceph-mon[74418]: Reconfiguring daemon osd.1 on compute-0
Dec 05 09:50:53 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:53 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:53 compute-0 ceph-mon[74418]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec 05 09:50:53 compute-0 ceph-mon[74418]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec 05 09:50:53 compute-0 ceph-mon[74418]: osdmap e128: 3 total, 3 up, 3 in
Dec 05 09:50:53 compute-0 ceph-mon[74418]: pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:50:53 compute-0 podman[105974]: 2025-12-05 09:50:53.4748413 +0000 UTC m=+0.118338917 container remove dc2521f476ac6cd8b02d9a95c2d20034aa296ae30c8ddb7ef7e3087931bef2ec (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:53 compute-0 bash[105974]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0
Dec 05 09:50:53 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Dec 05 09:50:53 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@node-exporter.compute-0.service: Failed with result 'exit-code'.
Dec 05 09:50:53 compute-0 systemd[1]: Stopped Ceph node-exporter.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:50:53 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@node-exporter.compute-0.service: Consumed 2.466s CPU time.
Dec 05 09:50:53 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:50:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:53.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:53 compute-0 podman[106079]: 2025-12-05 09:50:53.908349834 +0000 UTC m=+0.054069337 container create 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:53 compute-0 podman[106079]: 2025-12-05 09:50:53.87995228 +0000 UTC m=+0.025671823 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec 05 09:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8659d606da2ce619c7571f960c1b3a905c45579b73bf050594576402c3a4ed4f/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:53 compute-0 podman[106079]: 2025-12-05 09:50:53.990167205 +0000 UTC m=+0.135886758 container init 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:54 compute-0 podman[106079]: 2025-12-05 09:50:53.999987851 +0000 UTC m=+0.145707354 container start 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:54 compute-0 bash[106079]: 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.005Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.005Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.006Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.006Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.006Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.006Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=arp
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=bcache
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=bonding
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=cpu
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=dmi
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=edac
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=entropy
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=filefd
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=hwmon
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=netclass
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=netdev
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=netstat
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=nfs
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=nvme
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=os
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=pressure
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=rapl
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=selinux
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.008Z caller=node_exporter.go:117 level=info collector=softnet
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.009Z caller=node_exporter.go:117 level=info collector=stat
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.009Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.009Z caller=node_exporter.go:117 level=info collector=textfile
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.009Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.009Z caller=node_exporter.go:117 level=info collector=time
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.009Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.009Z caller=node_exporter.go:117 level=info collector=uname
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.009Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.009Z caller=node_exporter.go:117 level=info collector=xfs
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.009Z caller=node_exporter.go:117 level=info collector=zfs
Dec 05 09:50:54 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.011Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0[106095]: ts=2025-12-05T09:50:54.011Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 05 09:50:54 compute-0 sudo[105903]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:54 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 05 09:50:54 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 05 09:50:54 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 05 09:50:54 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:54 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec 05 09:50:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec 05 09:50:54 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec 05 09:50:54 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 129 pg[9.1e( v 52'1029 (0'0,52'1029] local-lis/les=0/0 n=5 ec=55/40 lis/c=127/74 les/c/f=128/75/0 sis=129) [1] r=0 lpr=129 pi=[74,129)/1 luod=0'0 crt=52'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:54 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 129 pg[9.1e( v 52'1029 (0'0,52'1029] local-lis/les=0/0 n=5 ec=55/40 lis/c=127/74 les/c/f=128/75/0 sis=129) [1] r=0 lpr=129 pi=[74,129)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:50:54 compute-0 sudo[106105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:54 compute-0 sudo[106105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:54 compute-0 sudo[106105]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:54 compute-0 sudo[106130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:54 compute-0 sudo[106130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:54 compute-0 podman[106172]: 2025-12-05 09:50:54.645820551 +0000 UTC m=+0.052094035 volume create eb431cd3780156b4981ccbac4ff03af2799555ea515e36521cba419995187160
Dec 05 09:50:54 compute-0 podman[106172]: 2025-12-05 09:50:54.657146527 +0000 UTC m=+0.063420021 container create 751f1a3f3b8f8529c7dd2b7667fa5e49d3f048b02227e9bc055dfdd583bdd251 (image=quay.io/prometheus/alertmanager:v0.25.0, name=kind_newton, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:54 compute-0 systemd[1]: Started libpod-conmon-751f1a3f3b8f8529c7dd2b7667fa5e49d3f048b02227e9bc055dfdd583bdd251.scope.
Dec 05 09:50:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:54 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c0013d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:54 compute-0 podman[106172]: 2025-12-05 09:50:54.631366793 +0000 UTC m=+0.037640297 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 05 09:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0acf2437bf57e5bba012dc2f920327d518f50d56f88bada8c5543b46acb8109a/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:54 compute-0 podman[106172]: 2025-12-05 09:50:54.752594385 +0000 UTC m=+0.158867969 container init 751f1a3f3b8f8529c7dd2b7667fa5e49d3f048b02227e9bc055dfdd583bdd251 (image=quay.io/prometheus/alertmanager:v0.25.0, name=kind_newton, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:54 compute-0 podman[106172]: 2025-12-05 09:50:54.759129375 +0000 UTC m=+0.165402859 container start 751f1a3f3b8f8529c7dd2b7667fa5e49d3f048b02227e9bc055dfdd583bdd251 (image=quay.io/prometheus/alertmanager:v0.25.0, name=kind_newton, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:54 compute-0 kind_newton[106189]: 65534 65534
Dec 05 09:50:54 compute-0 systemd[1]: libpod-751f1a3f3b8f8529c7dd2b7667fa5e49d3f048b02227e9bc055dfdd583bdd251.scope: Deactivated successfully.
Dec 05 09:50:54 compute-0 podman[106172]: 2025-12-05 09:50:54.764493146 +0000 UTC m=+0.170766670 container attach 751f1a3f3b8f8529c7dd2b7667fa5e49d3f048b02227e9bc055dfdd583bdd251 (image=quay.io/prometheus/alertmanager:v0.25.0, name=kind_newton, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:54 compute-0 podman[106172]: 2025-12-05 09:50:54.764899416 +0000 UTC m=+0.171172920 container died 751f1a3f3b8f8529c7dd2b7667fa5e49d3f048b02227e9bc055dfdd583bdd251 (image=quay.io/prometheus/alertmanager:v0.25.0, name=kind_newton, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0acf2437bf57e5bba012dc2f920327d518f50d56f88bada8c5543b46acb8109a-merged.mount: Deactivated successfully.
Dec 05 09:50:54 compute-0 podman[106172]: 2025-12-05 09:50:54.804476992 +0000 UTC m=+0.210750476 container remove 751f1a3f3b8f8529c7dd2b7667fa5e49d3f048b02227e9bc055dfdd583bdd251 (image=quay.io/prometheus/alertmanager:v0.25.0, name=kind_newton, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:54 compute-0 podman[106172]: 2025-12-05 09:50:54.808518428 +0000 UTC m=+0.214791982 volume remove eb431cd3780156b4981ccbac4ff03af2799555ea515e36521cba419995187160
Dec 05 09:50:54 compute-0 systemd[1]: libpod-conmon-751f1a3f3b8f8529c7dd2b7667fa5e49d3f048b02227e9bc055dfdd583bdd251.scope: Deactivated successfully.
Dec 05 09:50:54 compute-0 podman[106205]: 2025-12-05 09:50:54.871324411 +0000 UTC m=+0.043543930 volume create 675dee696d57243d7f13d3155d286947b97fdbe447130006e086f0ef855549d0
Dec 05 09:50:54 compute-0 podman[106205]: 2025-12-05 09:50:54.889699252 +0000 UTC m=+0.061918791 container create 129ea7ee09b7ff7697ccdd554ae42adedc085e2b51c9ee3111d1e3a8a8538463 (image=quay.io/prometheus/alertmanager:v0.25.0, name=hungry_pasteur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:54 compute-0 systemd[1]: Started libpod-conmon-129ea7ee09b7ff7697ccdd554ae42adedc085e2b51c9ee3111d1e3a8a8538463.scope.
Dec 05 09:50:54 compute-0 podman[106205]: 2025-12-05 09:50:54.852369696 +0000 UTC m=+0.024589275 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 05 09:50:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ef49331f54920254f6d2c2c355e3e878068baa0a39e23929679bbfe2fa2c60/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:55 compute-0 podman[106205]: 2025-12-05 09:50:55.009077006 +0000 UTC m=+0.181296555 container init 129ea7ee09b7ff7697ccdd554ae42adedc085e2b51c9ee3111d1e3a8a8538463 (image=quay.io/prometheus/alertmanager:v0.25.0, name=hungry_pasteur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:55 compute-0 podman[106205]: 2025-12-05 09:50:55.020475685 +0000 UTC m=+0.192695204 container start 129ea7ee09b7ff7697ccdd554ae42adedc085e2b51c9ee3111d1e3a8a8538463 (image=quay.io/prometheus/alertmanager:v0.25.0, name=hungry_pasteur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:55 compute-0 hungry_pasteur[106224]: 65534 65534
Dec 05 09:50:55 compute-0 systemd[1]: libpod-129ea7ee09b7ff7697ccdd554ae42adedc085e2b51c9ee3111d1e3a8a8538463.scope: Deactivated successfully.
Dec 05 09:50:55 compute-0 conmon[106224]: conmon 129ea7ee09b7ff7697cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-129ea7ee09b7ff7697ccdd554ae42adedc085e2b51c9ee3111d1e3a8a8538463.scope/container/memory.events
Dec 05 09:50:55 compute-0 podman[106205]: 2025-12-05 09:50:55.02678711 +0000 UTC m=+0.199006679 container attach 129ea7ee09b7ff7697ccdd554ae42adedc085e2b51c9ee3111d1e3a8a8538463 (image=quay.io/prometheus/alertmanager:v0.25.0, name=hungry_pasteur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:55 compute-0 podman[106205]: 2025-12-05 09:50:55.027893648 +0000 UTC m=+0.200113207 container died 129ea7ee09b7ff7697ccdd554ae42adedc085e2b51c9ee3111d1e3a8a8538463 (image=quay.io/prometheus/alertmanager:v0.25.0, name=hungry_pasteur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-03ef49331f54920254f6d2c2c355e3e878068baa0a39e23929679bbfe2fa2c60-merged.mount: Deactivated successfully.
Dec 05 09:50:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:55 compute-0 ceph-mon[74418]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 05 09:50:55 compute-0 ceph-mon[74418]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 05 09:50:55 compute-0 ceph-mon[74418]: osdmap e129: 3 total, 3 up, 3 in
Dec 05 09:50:55 compute-0 podman[106205]: 2025-12-05 09:50:55.109528564 +0000 UTC m=+0.281748083 container remove 129ea7ee09b7ff7697ccdd554ae42adedc085e2b51c9ee3111d1e3a8a8538463 (image=quay.io/prometheus/alertmanager:v0.25.0, name=hungry_pasteur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:55 compute-0 podman[106205]: 2025-12-05 09:50:55.115222383 +0000 UTC m=+0.287441912 volume remove 675dee696d57243d7f13d3155d286947b97fdbe447130006e086f0ef855549d0
Dec 05 09:50:55 compute-0 systemd[1]: libpod-conmon-129ea7ee09b7ff7697ccdd554ae42adedc085e2b51c9ee3111d1e3a8a8538463.scope: Deactivated successfully.
Dec 05 09:50:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec 05 09:50:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec 05 09:50:55 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:50:55 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec 05 09:50:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:55 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70001f70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:55 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 130 pg[9.1f( v 52'1029 (0'0,52'1029] local-lis/les=0/0 n=5 ec=55/40 lis/c=128/94 les/c/f=129/95/0 sis=130) [1] r=0 lpr=130 pi=[94,130)/1 luod=0'0 crt=52'1029 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 05 09:50:55 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 130 pg[9.1f( v 52'1029 (0'0,52'1029] local-lis/les=0/0 n=5 ec=55/40 lis/c=128/94 les/c/f=129/95/0 sis=130) [1] r=0 lpr=130 pi=[94,130)/1 crt=52'1029 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 09:50:55 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 130 pg[9.1e( v 52'1029 (0'0,52'1029] local-lis/les=129/130 n=5 ec=55/40 lis/c=127/74 les/c/f=128/75/0 sis=129) [1] r=0 lpr=129 pi=[74,129)/1 crt=52'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:50:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:55.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[98670]: ts=2025-12-05T09:50:55.337Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Dec 05 09:50:55 compute-0 podman[106272]: 2025-12-05 09:50:55.348092167 +0000 UTC m=+0.049545597 container died aa11c6973d139c2e9bb6746f25caf931656607e7034cefb81d97cc477f867cd1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a58a425ce4bbec0686de465a07de8d6f5898fe9b8f5276797892f0af8beef35d-merged.mount: Deactivated successfully.
Dec 05 09:50:55 compute-0 podman[106272]: 2025-12-05 09:50:55.388791422 +0000 UTC m=+0.090244852 container remove aa11c6973d139c2e9bb6746f25caf931656607e7034cefb81d97cc477f867cd1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:55 compute-0 podman[106272]: 2025-12-05 09:50:55.392408496 +0000 UTC m=+0.093861946 volume remove 3ce96c36949a79e265f28d2e4dd682f09944630357c074912d3a86f2ec1e3f05
Dec 05 09:50:55 compute-0 bash[106272]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0
Dec 05 09:50:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v36: 353 pgs: 1 remapped+peering, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 54 B/s, 2 objects/s recovering
Dec 05 09:50:55 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@alertmanager.compute-0.service: Deactivated successfully.
Dec 05 09:50:55 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:50:55 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@alertmanager.compute-0.service: Consumed 1.443s CPU time.
Dec 05 09:50:55 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:50:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:50:55] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 09:50:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:50:55] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 09:50:55 compute-0 podman[106375]: 2025-12-05 09:50:55.742553108 +0000 UTC m=+0.036769693 volume create e786ec464b5c17a82cbe94e1d30c38028445f48e8d10c4f92c367169a6596a4b
Dec 05 09:50:55 compute-0 podman[106375]: 2025-12-05 09:50:55.750535967 +0000 UTC m=+0.044752552 container create a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f730aabd552f2eaa29843c3940ca7ba8d55fe59a8682dffad24ad4ae1b1ad61/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f730aabd552f2eaa29843c3940ca7ba8d55fe59a8682dffad24ad4ae1b1ad61/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:55 compute-0 podman[106375]: 2025-12-05 09:50:55.806168493 +0000 UTC m=+0.100385098 container init a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:55 compute-0 podman[106375]: 2025-12-05 09:50:55.811305068 +0000 UTC m=+0.105521653 container start a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:50:55 compute-0 bash[106375]: a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101
Dec 05 09:50:55 compute-0 podman[106375]: 2025-12-05 09:50:55.729549529 +0000 UTC m=+0.023766134 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 05 09:50:55 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:50:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:50:55.842Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec 05 09:50:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:50:55.843Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec 05 09:50:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:55.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:50:55.851Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec 05 09:50:55 compute-0 sudo[106130]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:50:55.861Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec 05 09:50:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:55 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 05 09:50:55 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 05 09:50:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:50:55.900Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 05 09:50:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:50:55.901Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 05 09:50:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:50:55.906Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec 05 09:50:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:50:55.906Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec 05 09:50:55 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Dec 05 09:50:55 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Dec 05 09:50:56 compute-0 sudo[106413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:56 compute-0 sudo[106413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:56 compute-0 sudo[106413]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:56 compute-0 sudo[106439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 09:50:56 compute-0 sudo[106439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:50:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:56 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec 05 09:50:56 compute-0 ceph-mon[74418]: osdmap e130: 3 total, 3 up, 3 in
Dec 05 09:50:56 compute-0 ceph-mon[74418]: pgmap v36: 353 pgs: 1 remapped+peering, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 54 B/s, 2 objects/s recovering
Dec 05 09:50:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec 05 09:50:56 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec 05 09:50:56 compute-0 ceph-osd[82677]: osd.1 pg_epoch: 131 pg[9.1f( v 52'1029 (0'0,52'1029] local-lis/les=130/131 n=5 ec=55/40 lis/c=128/94 les/c/f=129/95/0 sis=130) [1] r=0 lpr=130 pi=[94,130)/1 crt=52'1029 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 09:50:56 compute-0 podman[106481]: 2025-12-05 09:50:56.589223944 +0000 UTC m=+0.032838161 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 05 09:50:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:56 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:57 compute-0 podman[106481]: 2025-12-05 09:50:57.02017768 +0000 UTC m=+0.463791877 container create 5154729b701ff030741e65aaac7e3d39d6b47d70cbb3f5083dc45ab857a34162 (image=quay.io/ceph/grafana:10.4.0, name=competent_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:57 compute-0 systemd[1]: Started libpod-conmon-5154729b701ff030741e65aaac7e3d39d6b47d70cbb3f5083dc45ab857a34162.scope.
Dec 05 09:50:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:57 compute-0 podman[106481]: 2025-12-05 09:50:57.117176948 +0000 UTC m=+0.560791165 container init 5154729b701ff030741e65aaac7e3d39d6b47d70cbb3f5083dc45ab857a34162 (image=quay.io/ceph/grafana:10.4.0, name=competent_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:57 compute-0 podman[106481]: 2025-12-05 09:50:57.125430974 +0000 UTC m=+0.569045171 container start 5154729b701ff030741e65aaac7e3d39d6b47d70cbb3f5083dc45ab857a34162 (image=quay.io/ceph/grafana:10.4.0, name=competent_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:50:57 compute-0 podman[106481]: 2025-12-05 09:50:57.131506203 +0000 UTC m=+0.575120420 container attach 5154729b701ff030741e65aaac7e3d39d6b47d70cbb3f5083dc45ab857a34162 (image=quay.io/ceph/grafana:10.4.0, name=competent_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:57 compute-0 competent_mirzakhani[106500]: 472 0
Dec 05 09:50:57 compute-0 systemd[1]: libpod-5154729b701ff030741e65aaac7e3d39d6b47d70cbb3f5083dc45ab857a34162.scope: Deactivated successfully.
Dec 05 09:50:57 compute-0 podman[106481]: 2025-12-05 09:50:57.134367928 +0000 UTC m=+0.577982125 container died 5154729b701ff030741e65aaac7e3d39d6b47d70cbb3f5083dc45ab857a34162 (image=quay.io/ceph/grafana:10.4.0, name=competent_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d817c39a67dae872d080d5b7c971679e0f5ea5685ec318ffd7af35f10fafd1fa-merged.mount: Deactivated successfully.
Dec 05 09:50:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:57 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c002f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:57 compute-0 podman[106481]: 2025-12-05 09:50:57.180455535 +0000 UTC m=+0.624069732 container remove 5154729b701ff030741e65aaac7e3d39d6b47d70cbb3f5083dc45ab857a34162 (image=quay.io/ceph/grafana:10.4.0, name=competent_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:57 compute-0 systemd[1]: libpod-conmon-5154729b701ff030741e65aaac7e3d39d6b47d70cbb3f5083dc45ab857a34162.scope: Deactivated successfully.
Dec 05 09:50:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:50:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:57.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:50:57 compute-0 podman[106518]: 2025-12-05 09:50:57.225589416 +0000 UTC m=+0.026657209 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 05 09:50:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v38: 353 pgs: 1 remapped+peering, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 240 B/s rd, 0 op/s; 51 B/s, 2 objects/s recovering
Dec 05 09:50:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:50:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:50:57 compute-0 ceph-mon[74418]: Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 05 09:50:57 compute-0 ceph-mon[74418]: Reconfiguring daemon grafana.compute-0 on compute-0
Dec 05 09:50:57 compute-0 ceph-mon[74418]: osdmap e131: 3 total, 3 up, 3 in
Dec 05 09:50:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:50:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:50:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:50:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:50:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:50:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:50:57 compute-0 podman[106518]: 2025-12-05 09:50:57.644354543 +0000 UTC m=+0.445422316 container create 831b4d5b0497c049974eca559a6c0a5ab2d5a21d1096aaf2ca5fb19d64443835 (image=quay.io/ceph/grafana:10.4.0, name=fervent_shamir, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:57 compute-0 systemd[1]: Started libpod-conmon-831b4d5b0497c049974eca559a6c0a5ab2d5a21d1096aaf2ca5fb19d64443835.scope.
Dec 05 09:50:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:50:57 compute-0 podman[106518]: 2025-12-05 09:50:57.72947 +0000 UTC m=+0.530537793 container init 831b4d5b0497c049974eca559a6c0a5ab2d5a21d1096aaf2ca5fb19d64443835 (image=quay.io/ceph/grafana:10.4.0, name=fervent_shamir, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:57 compute-0 podman[106518]: 2025-12-05 09:50:57.737083049 +0000 UTC m=+0.538150822 container start 831b4d5b0497c049974eca559a6c0a5ab2d5a21d1096aaf2ca5fb19d64443835 (image=quay.io/ceph/grafana:10.4.0, name=fervent_shamir, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:57 compute-0 fervent_shamir[106535]: 472 0
Dec 05 09:50:57 compute-0 systemd[1]: libpod-831b4d5b0497c049974eca559a6c0a5ab2d5a21d1096aaf2ca5fb19d64443835.scope: Deactivated successfully.
Dec 05 09:50:57 compute-0 podman[106518]: 2025-12-05 09:50:57.740328924 +0000 UTC m=+0.541396717 container attach 831b4d5b0497c049974eca559a6c0a5ab2d5a21d1096aaf2ca5fb19d64443835 (image=quay.io/ceph/grafana:10.4.0, name=fervent_shamir, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:57 compute-0 podman[106518]: 2025-12-05 09:50:57.740879579 +0000 UTC m=+0.541947352 container died 831b4d5b0497c049974eca559a6c0a5ab2d5a21d1096aaf2ca5fb19d64443835 (image=quay.io/ceph/grafana:10.4.0, name=fervent_shamir, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8357d16eda328aea7dc52e1e250db20589cfb5ef1e17d427dc54cc6e5e78160c-merged.mount: Deactivated successfully.
Dec 05 09:50:57 compute-0 podman[106518]: 2025-12-05 09:50:57.77649449 +0000 UTC m=+0.577562303 container remove 831b4d5b0497c049974eca559a6c0a5ab2d5a21d1096aaf2ca5fb19d64443835 (image=quay.io/ceph/grafana:10.4.0, name=fervent_shamir, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:57 compute-0 systemd[1]: libpod-conmon-831b4d5b0497c049974eca559a6c0a5ab2d5a21d1096aaf2ca5fb19d64443835.scope: Deactivated successfully.
Dec 05 09:50:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:57.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:50:57.862Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000572199s
Dec 05 09:50:57 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=server t=2025-12-05T09:50:58.02718713Z level=info msg="Shutdown started" reason="System signal: terminated"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=tracing t=2025-12-05T09:50:58.027385886Z level=info msg="Closing tracing"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=ticker t=2025-12-05T09:50:58.027457067Z level=info msg=stopped last_tick=2025-12-05T09:50:50Z
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=grafana-apiserver t=2025-12-05T09:50:58.028172926Z level=info msg="StorageObjectCountTracker pruner is exiting"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[99379]: logger=sqlstore.transactions t=2025-12-05T09:50:58.039342318Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec 05 09:50:58 compute-0 podman[106586]: 2025-12-05 09:50:58.078598695 +0000 UTC m=+0.084606134 container died bfc89c7b51db319a90bd517ef6d4861794d073950d7be4a9d66708be3b568f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:58 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c002f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f04e1d0fd365ee990e4838604021bd5bd072e7f44f32bbfc4b7ca69751bd7682-merged.mount: Deactivated successfully.
Dec 05 09:50:58 compute-0 podman[106586]: 2025-12-05 09:50:58.18274929 +0000 UTC m=+0.188756729 container remove bfc89c7b51db319a90bd517ef6d4861794d073950d7be4a9d66708be3b568f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:58 compute-0 bash[106586]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0
Dec 05 09:50:58 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@grafana.compute-0.service: Deactivated successfully.
Dec 05 09:50:58 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:50:58 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@grafana.compute-0.service: Consumed 5.054s CPU time.
Dec 05 09:50:58 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:50:58 compute-0 podman[106693]: 2025-12-05 09:50:58.591221159 +0000 UTC m=+0.028443305 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 05 09:50:58 compute-0 podman[106693]: 2025-12-05 09:50:58.698676462 +0000 UTC m=+0.135898608 container create 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:58 compute-0 ceph-mon[74418]: pgmap v38: 353 pgs: 1 remapped+peering, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 240 B/s rd, 0 op/s; 51 B/s, 2 objects/s recovering
Dec 05 09:50:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:58 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c002f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39048b7233ebfa43703e32d98039b0a44dc50eb7a3a6932a9a07ee80c54eef63/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39048b7233ebfa43703e32d98039b0a44dc50eb7a3a6932a9a07ee80c54eef63/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39048b7233ebfa43703e32d98039b0a44dc50eb7a3a6932a9a07ee80c54eef63/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39048b7233ebfa43703e32d98039b0a44dc50eb7a3a6932a9a07ee80c54eef63/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39048b7233ebfa43703e32d98039b0a44dc50eb7a3a6932a9a07ee80c54eef63/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec 05 09:50:58 compute-0 podman[106693]: 2025-12-05 09:50:58.768536849 +0000 UTC m=+0.205758975 container init 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:58 compute-0 podman[106693]: 2025-12-05 09:50:58.774704431 +0000 UTC m=+0.211926557 container start 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:50:58 compute-0 bash[106693]: 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260
Dec 05 09:50:58 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:50:58 compute-0 sudo[106439]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:50:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:50:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:58 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Dec 05 09:50:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Dec 05 09:50:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:50:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 05 09:50:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:50:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:58 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Dec 05 09:50:58 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992421267Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-05T09:50:58Z
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992722766Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992730316Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992734176Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992738346Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992742036Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992745696Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992749177Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992752947Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992756527Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992760267Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992765657Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992769367Z level=info msg=Target target=[all]
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992776177Z level=info msg="Path Home" path=/usr/share/grafana
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992779547Z level=info msg="Path Data" path=/var/lib/grafana
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992782747Z level=info msg="Path Logs" path=/var/log/grafana
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992786037Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992789398Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=settings t=2025-12-05T09:50:58.992792668Z level=info msg="App mode production"
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=sqlstore t=2025-12-05T09:50:58.993178948Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=sqlstore t=2025-12-05T09:50:58.993205898Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec 05 09:50:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=migrator t=2025-12-05T09:50:58.994222765Z level=info msg="Starting DB migrations"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=migrator t=2025-12-05T09:50:59.017328419Z level=info msg="migrations completed" performed=0 skipped=547 duration=803.941µs
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=sqlstore t=2025-12-05T09:50:59.018606513Z level=info msg="Created default organization"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=secrets t=2025-12-05T09:50:59.019342882Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=plugin.store t=2025-12-05T09:50:59.045743472Z level=info msg="Loading plugins..."
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=local.finder t=2025-12-05T09:50:59.128763725Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=plugin.store t=2025-12-05T09:50:59.128803186Z level=info msg="Plugins loaded" count=55 duration=83.060884ms
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=query_data t=2025-12-05T09:50:59.13277192Z level=info msg="Query Service initialization"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=live.push_http t=2025-12-05T09:50:59.136394035Z level=info msg="Live Push Gateway initialization"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=ngalert.migration t=2025-12-05T09:50:59.140599925Z level=info msg=Starting
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=ngalert.state.manager t=2025-12-05T09:50:59.152200018Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=infra.usagestats.collector t=2025-12-05T09:50:59.155135925Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=provisioning.datasources t=2025-12-05T09:50:59.157599859Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:50:59 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70002ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=provisioning.alerting t=2025-12-05T09:50:59.182709126Z level=info msg="starting to provision alerting"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=provisioning.alerting t=2025-12-05T09:50:59.182740617Z level=info msg="finished to provision alerting"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=ngalert.state.manager t=2025-12-05T09:50:59.183498258Z level=info msg="Warming state cache for startup"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=grafanaStorageLogger t=2025-12-05T09:50:59.184039332Z level=info msg="Storage starting"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=ngalert.multiorg.alertmanager t=2025-12-05T09:50:59.185666694Z level=info msg="Starting MultiOrg Alertmanager"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=http.server t=2025-12-05T09:50:59.186205779Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=http.server t=2025-12-05T09:50:59.186606509Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=provisioning.dashboard t=2025-12-05T09:50:59.187859781Z level=info msg="starting to provision dashboards"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=provisioning.dashboard t=2025-12-05T09:50:59.227256322Z level=info msg="finished to provision dashboards"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=ngalert.state.manager t=2025-12-05T09:50:59.235704473Z level=info msg="State cache has been initialized" states=0 duration=52.201435ms
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=ngalert.scheduler t=2025-12-05T09:50:59.235808836Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=ticker t=2025-12-05T09:50:59.235920379Z level=info msg=starting first_tick=2025-12-05T09:51:00Z
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=plugins.update.checker t=2025-12-05T09:50:59.266477178Z level=info msg="Update check succeeded" duration=82.288923ms
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=grafana.update.checker t=2025-12-05T09:50:59.27568528Z level=info msg="Update check succeeded" duration=90.179151ms
Dec 05 09:50:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:50:59.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v39: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 54 B/s, 2 objects/s recovering
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=grafana-apiserver t=2025-12-05T09:50:59.672088528Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec 05 09:50:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=grafana-apiserver t=2025-12-05T09:50:59.672826197Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec 05 09:50:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:50:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:50:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:50:59.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:50:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:50:59 compute-0 ceph-mon[74418]: Reconfiguring crash.compute-1 (monmap changed)...
Dec 05 09:50:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 05 09:50:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:50:59 compute-0 ceph-mon[74418]: Reconfiguring daemon crash.compute-1 on compute-1
Dec 05 09:50:59 compute-0 ceph-mon[74418]: pgmap v39: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 54 B/s, 2 objects/s recovering
Dec 05 09:51:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:51:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:51:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:00 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 05 09:51:00 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 05 09:51:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 05 09:51:00 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 05 09:51:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:51:00 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:00 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c002f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:00 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Dec 05 09:51:00 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Dec 05 09:51:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:00 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:01 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:01 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:01 compute-0 ceph-mon[74418]: Reconfiguring osd.0 (monmap changed)...
Dec 05 09:51:01 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 05 09:51:01 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:01 compute-0 ceph-mon[74418]: Reconfiguring daemon osd.0 on compute-1
Dec 05 09:51:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:01 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:01.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:51:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:51:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v40: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 282 B/s rd, 0 op/s; 15 B/s, 0 objects/s recovering
Dec 05 09:51:01 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Dec 05 09:51:01 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Dec 05 09:51:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:51:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 09:51:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:51:01 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 09:51:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:51:01 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:01 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Dec 05 09:51:01 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Dec 05 09:51:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:01.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:51:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:02 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70002ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:02 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c002f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:02 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:02 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:02 compute-0 ceph-mon[74418]: pgmap v40: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 282 B/s rd, 0 op/s; 15 B/s, 0 objects/s recovering
Dec 05 09:51:02 compute-0 ceph-mon[74418]: Reconfiguring mon.compute-1 (monmap changed)...
Dec 05 09:51:02 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 09:51:02 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 09:51:02 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:02 compute-0 ceph-mon[74418]: Reconfiguring daemon mon.compute-1 on compute-1
Dec 05 09:51:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:51:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:51:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:03 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring rgw.rgw.compute-1.oiufcm (unknown last config time)...
Dec 05 09:51:03 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring rgw.rgw.compute-1.oiufcm (unknown last config time)...
Dec 05 09:51:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oiufcm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 05 09:51:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oiufcm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:51:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 05 09:51:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:51:03 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:03 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon rgw.rgw.compute-1.oiufcm on compute-1
Dec 05 09:51:03 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon rgw.rgw.compute-1.oiufcm on compute-1
Dec 05 09:51:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:03 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:03.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v41: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Dec 05 09:51:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:03.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:04 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:04 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:04 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:04 compute-0 ceph-mon[74418]: Reconfiguring rgw.rgw.compute-1.oiufcm (unknown last config time)...
Dec 05 09:51:04 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oiufcm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 09:51:04 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:04 compute-0 ceph-mon[74418]: Reconfiguring daemon rgw.rgw.compute-1.oiufcm on compute-1
Dec 05 09:51:04 compute-0 ceph-mon[74418]: pgmap v41: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Dec 05 09:51:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:51:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:51:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:04 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70003bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:04 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Dec 05 09:51:04 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Dec 05 09:51:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:51:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 09:51:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:51:04 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 09:51:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:51:04 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:04 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Dec 05 09:51:04 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Dec 05 09:51:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:05 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c002f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:05.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v42: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Dec 05 09:51:05 compute-0 sudo[106771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:51:05 compute-0 sudo[106771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:51:05 compute-0 sudo[106771]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:51:05] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 09:51:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:51:05] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 09:51:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:05 compute-0 ceph-mon[74418]: Reconfiguring mon.compute-2 (monmap changed)...
Dec 05 09:51:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 09:51:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 09:51:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:05 compute-0 ceph-mon[74418]: Reconfiguring daemon mon.compute-2 on compute-2
Dec 05 09:51:05 compute-0 ceph-mon[74418]: pgmap v42: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Dec 05 09:51:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:51:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:51:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:05 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.wewrgp (monmap changed)...
Dec 05 09:51:05 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.wewrgp (monmap changed)...
Dec 05 09:51:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.wewrgp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:51:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.wewrgp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 05 09:51:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:51:05 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 09:51:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:51:05 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:05 compute-0 ceph-mgr[74711]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.wewrgp on compute-2
Dec 05 09:51:05 compute-0 ceph-mgr[74711]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.wewrgp on compute-2
Dec 05 09:51:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:51:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:05.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:51:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:51:05.864Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003042111s
Dec 05 09:51:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:06 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:51:06 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:51:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:06 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:07 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:07 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:07 compute-0 ceph-mon[74418]: Reconfiguring mgr.compute-2.wewrgp (monmap changed)...
Dec 05 09:51:07 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.wewrgp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 05 09:51:07 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 09:51:07 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:07 compute-0 ceph-mon[74418]: Reconfiguring daemon mgr.compute-2.wewrgp on compute-2
Dec 05 09:51:07 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: [dashboard INFO request] [192.168.122.100:57738] [POST] [200] [0.146s] [4.0B] [50acdc7c-41f5-48c9-83c9-8be370f84a23] /api/prometheus_receiver
Dec 05 09:51:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Dec 05 09:51:07 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 05 09:51:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Dec 05 09:51:07 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 05 09:51:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Dec 05 09:51:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 05 09:51:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec 05 09:51:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:51:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: [prometheus INFO root] Restarting engine...
Dec 05 09:51:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: [05/Dec/2025:09:51:07] ENGINE Bus STOPPING
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.error] [05/Dec/2025:09:51:07] ENGINE Bus STOPPING
Dec 05 09:51:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:07 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70003bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:07 compute-0 sudo[106798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:51:07 compute-0 sudo[106798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:51:07 compute-0 sudo[106798]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:07.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:07 compute-0 sudo[106823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 09:51:07 compute-0 sudo[106823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v43: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 273 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 05 09:51:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: [05/Dec/2025:09:51:07] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec 05 09:51:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: [05/Dec/2025:09:51:07] ENGINE Bus STOPPED
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.error] [05/Dec/2025:09:51:07] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec 05 09:51:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: [05/Dec/2025:09:51:07] ENGINE Bus STARTING
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.error] [05/Dec/2025:09:51:07] ENGINE Bus STOPPED
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.error] [05/Dec/2025:09:51:07] ENGINE Bus STARTING
Dec 05 09:51:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: [05/Dec/2025:09:51:07] ENGINE Serving on http://:::9283
Dec 05 09:51:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: [05/Dec/2025:09:51:07] ENGINE Bus STARTED
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.error] [05/Dec/2025:09:51:07] ENGINE Serving on http://:::9283
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.error] [05/Dec/2025:09:51:07] ENGINE Bus STARTED
Dec 05 09:51:07 compute-0 ceph-mgr[74711]: [prometheus INFO root] Engine started.
Dec 05 09:51:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:07.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:07 compute-0 podman[106939]: 2025-12-05 09:51:07.958034188 +0000 UTC m=+0.087996206 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 09:51:08 compute-0 podman[106939]: 2025-12-05 09:51:08.077302572 +0000 UTC m=+0.207264580 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 05 09:51:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 05 09:51:08 compute-0 ceph-mon[74418]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 05 09:51:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 05 09:51:08 compute-0 ceph-mon[74418]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 05 09:51:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 05 09:51:08 compute-0 ceph-mon[74418]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 05 09:51:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:08 compute-0 ceph-mon[74418]: pgmap v43: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 273 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 05 09:51:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:08 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c002f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:08 compute-0 podman[107061]: 2025-12-05 09:51:08.65697836 +0000 UTC m=+0.066583757 container exec 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:51:08 compute-0 podman[107061]: 2025-12-05 09:51:08.669921483 +0000 UTC m=+0.079526870 container exec_died 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:51:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:08 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:09 compute-0 podman[107153]: 2025-12-05 09:51:09.081604985 +0000 UTC m=+0.073827840 container exec d1ea233284d0d310cc076ca9ad62473a1bc421943ae196b1f9584786262f3156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 09:51:09 compute-0 podman[107153]: 2025-12-05 09:51:09.120506107 +0000 UTC m=+0.112728942 container exec_died d1ea233284d0d310cc076ca9ad62473a1bc421943ae196b1f9584786262f3156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:51:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:09 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a660 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:09.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:09 compute-0 podman[107225]: 2025-12-05 09:51:09.373905658 +0000 UTC m=+0.048417955 container exec d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 09:51:09 compute-0 podman[107225]: 2025-12-05 09:51:09.394490105 +0000 UTC m=+0.069002382 container exec_died d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 09:51:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 05 09:51:09 compute-0 ceph-mon[74418]: pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 05 09:51:09 compute-0 podman[107297]: 2025-12-05 09:51:09.589572 +0000 UTC m=+0.053160782 container exec f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, version=2.2.4, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-type=git, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc.)
Dec 05 09:51:09 compute-0 podman[107297]: 2025-12-05 09:51:09.605644026 +0000 UTC m=+0.069232808 container exec_died f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, name=keepalived, io.openshift.expose-services=, vcs-type=git, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4)
Dec 05 09:51:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:09.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:10 compute-0 podman[107364]: 2025-12-05 09:51:10.016727592 +0000 UTC m=+0.078224556 container exec a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:51:10 compute-0 podman[107364]: 2025-12-05 09:51:10.054645197 +0000 UTC m=+0.116142121 container exec_died a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:51:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:10 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70003bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:10 compute-0 podman[107440]: 2025-12-05 09:51:10.299558405 +0000 UTC m=+0.058509104 container exec 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:51:10 compute-0 podman[107440]: 2025-12-05 09:51:10.493448728 +0000 UTC m=+0.252399507 container exec_died 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:51:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:10 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70003bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:11 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:11.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v45: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:51:11 compute-0 podman[107552]: 2025-12-05 09:51:11.474288828 +0000 UTC m=+0.069839624 container exec 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:51:11 compute-0 ceph-mon[74418]: pgmap v45: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:51:11 compute-0 podman[107552]: 2025-12-05 09:51:11.549858502 +0000 UTC m=+0.145409288 container exec_died 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:51:11 compute-0 sudo[106823]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:51:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:11.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:51:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:51:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:51:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:51:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:51:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:51:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 286 B/s rd, 0 op/s
Dec 05 09:51:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:12 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c002f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:51:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:51:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:51:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:51:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:51:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:51:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:12 compute-0 sudo[107605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:51:12 compute-0 sudo[107605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:51:12 compute-0 sudo[107605]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:12 compute-0 sudo[107630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:51:12 compute-0 sudo[107630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:51:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:51:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:51:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:12 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a6a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:12 compute-0 podman[107695]: 2025-12-05 09:51:12.867447285 +0000 UTC m=+0.086270129 container create dcdb4d6d2076c0bfdb33f5cce59d4dc243b8874769053a63bc8eaaa781431868 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_burnell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:51:12 compute-0 podman[107695]: 2025-12-05 09:51:12.806853838 +0000 UTC m=+0.025676682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:51:12 compute-0 systemd[1]: Started libpod-conmon-dcdb4d6d2076c0bfdb33f5cce59d4dc243b8874769053a63bc8eaaa781431868.scope.
Dec 05 09:51:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:51:12 compute-0 podman[107695]: 2025-12-05 09:51:12.974373202 +0000 UTC m=+0.193196046 container init dcdb4d6d2076c0bfdb33f5cce59d4dc243b8874769053a63bc8eaaa781431868 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_burnell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:51:12 compute-0 podman[107695]: 2025-12-05 09:51:12.981311246 +0000 UTC m=+0.200134060 container start dcdb4d6d2076c0bfdb33f5cce59d4dc243b8874769053a63bc8eaaa781431868 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_burnell, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Dec 05 09:51:12 compute-0 podman[107695]: 2025-12-05 09:51:12.984489 +0000 UTC m=+0.203311824 container attach dcdb4d6d2076c0bfdb33f5cce59d4dc243b8874769053a63bc8eaaa781431868 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:51:12 compute-0 beautiful_burnell[107712]: 167 167
Dec 05 09:51:12 compute-0 systemd[1]: libpod-dcdb4d6d2076c0bfdb33f5cce59d4dc243b8874769053a63bc8eaaa781431868.scope: Deactivated successfully.
Dec 05 09:51:12 compute-0 podman[107695]: 2025-12-05 09:51:12.987738716 +0000 UTC m=+0.206561540 container died dcdb4d6d2076c0bfdb33f5cce59d4dc243b8874769053a63bc8eaaa781431868 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 09:51:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:51:13 compute-0 ceph-mon[74418]: pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 286 B/s rd, 0 op/s
Dec 05 09:51:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:51:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:51:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:51:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:51:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e26731f56341b16a486ff028f0d709401a983c4b60caa37fc14cd403ab22f546-merged.mount: Deactivated successfully.
Dec 05 09:51:13 compute-0 podman[107695]: 2025-12-05 09:51:13.128584043 +0000 UTC m=+0.347406867 container remove dcdb4d6d2076c0bfdb33f5cce59d4dc243b8874769053a63bc8eaaa781431868 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 09:51:13 compute-0 systemd[1]: libpod-conmon-dcdb4d6d2076c0bfdb33f5cce59d4dc243b8874769053a63bc8eaaa781431868.scope: Deactivated successfully.
Dec 05 09:51:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:13 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70003bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:13 compute-0 podman[107735]: 2025-12-05 09:51:13.288546407 +0000 UTC m=+0.045891159 container create 517cea4ed36f13e246f45df175355386175986bc81ce5023cf6f55f1ecbccb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 09:51:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:13.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:13 compute-0 systemd[1]: Started libpod-conmon-517cea4ed36f13e246f45df175355386175986bc81ce5023cf6f55f1ecbccb37.scope.
Dec 05 09:51:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77204768d76a0c1ced463840d9c72de671d7306635801a88fcc64243cb8f6e4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:13 compute-0 podman[107735]: 2025-12-05 09:51:13.27058763 +0000 UTC m=+0.027932412 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77204768d76a0c1ced463840d9c72de671d7306635801a88fcc64243cb8f6e4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77204768d76a0c1ced463840d9c72de671d7306635801a88fcc64243cb8f6e4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77204768d76a0c1ced463840d9c72de671d7306635801a88fcc64243cb8f6e4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77204768d76a0c1ced463840d9c72de671d7306635801a88fcc64243cb8f6e4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:13 compute-0 podman[107735]: 2025-12-05 09:51:13.378492153 +0000 UTC m=+0.135836925 container init 517cea4ed36f13e246f45df175355386175986bc81ce5023cf6f55f1ecbccb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_chaum, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Dec 05 09:51:13 compute-0 podman[107735]: 2025-12-05 09:51:13.391119597 +0000 UTC m=+0.148464349 container start 517cea4ed36f13e246f45df175355386175986bc81ce5023cf6f55f1ecbccb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:51:13 compute-0 podman[107735]: 2025-12-05 09:51:13.404541763 +0000 UTC m=+0.161886545 container attach 517cea4ed36f13e246f45df175355386175986bc81ce5023cf6f55f1ecbccb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_chaum, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 09:51:13 compute-0 admiring_chaum[107751]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:51:13 compute-0 admiring_chaum[107751]: --> All data devices are unavailable
Dec 05 09:51:13 compute-0 systemd[1]: libpod-517cea4ed36f13e246f45df175355386175986bc81ce5023cf6f55f1ecbccb37.scope: Deactivated successfully.
Dec 05 09:51:13 compute-0 podman[107735]: 2025-12-05 09:51:13.72255303 +0000 UTC m=+0.479897772 container died 517cea4ed36f13e246f45df175355386175986bc81ce5023cf6f55f1ecbccb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_chaum, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 09:51:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-77204768d76a0c1ced463840d9c72de671d7306635801a88fcc64243cb8f6e4d-merged.mount: Deactivated successfully.
Dec 05 09:51:13 compute-0 podman[107735]: 2025-12-05 09:51:13.855855636 +0000 UTC m=+0.613200388 container remove 517cea4ed36f13e246f45df175355386175986bc81ce5023cf6f55f1ecbccb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_chaum, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:51:13 compute-0 systemd[1]: libpod-conmon-517cea4ed36f13e246f45df175355386175986bc81ce5023cf6f55f1ecbccb37.scope: Deactivated successfully.
Dec 05 09:51:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:13.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:13 compute-0 sudo[107630]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:13 compute-0 sudo[107780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:51:13 compute-0 sudo[107780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:51:13 compute-0 sudo[107780]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:14 compute-0 sudo[107805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:51:14 compute-0 sudo[107805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:51:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 381 B/s rd, 0 op/s
Dec 05 09:51:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:14 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:14 compute-0 podman[107872]: 2025-12-05 09:51:14.511203081 +0000 UTC m=+0.052304209 container create d0b950c3299684f92a0700ca723070b61b23d35eb2135b5fb27ab62aec73f7be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_nash, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:51:14 compute-0 systemd[1]: Started libpod-conmon-d0b950c3299684f92a0700ca723070b61b23d35eb2135b5fb27ab62aec73f7be.scope.
Dec 05 09:51:14 compute-0 podman[107872]: 2025-12-05 09:51:14.494747724 +0000 UTC m=+0.035848872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:51:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:51:14 compute-0 podman[107872]: 2025-12-05 09:51:14.618147208 +0000 UTC m=+0.159248426 container init d0b950c3299684f92a0700ca723070b61b23d35eb2135b5fb27ab62aec73f7be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_nash, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:51:14 compute-0 podman[107872]: 2025-12-05 09:51:14.627794694 +0000 UTC m=+0.168895822 container start d0b950c3299684f92a0700ca723070b61b23d35eb2135b5fb27ab62aec73f7be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_nash, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 09:51:14 compute-0 podman[107872]: 2025-12-05 09:51:14.63064262 +0000 UTC m=+0.171743748 container attach d0b950c3299684f92a0700ca723070b61b23d35eb2135b5fb27ab62aec73f7be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_nash, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:51:14 compute-0 ecstatic_nash[107889]: 167 167
Dec 05 09:51:14 compute-0 systemd[1]: libpod-d0b950c3299684f92a0700ca723070b61b23d35eb2135b5fb27ab62aec73f7be.scope: Deactivated successfully.
Dec 05 09:51:14 compute-0 podman[107872]: 2025-12-05 09:51:14.633573498 +0000 UTC m=+0.174674646 container died d0b950c3299684f92a0700ca723070b61b23d35eb2135b5fb27ab62aec73f7be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 09:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-672b9a687c211429dff1cbf00df686c26f2f491b5c625975c648ba117c5cefa9-merged.mount: Deactivated successfully.
Dec 05 09:51:14 compute-0 podman[107872]: 2025-12-05 09:51:14.677566354 +0000 UTC m=+0.218667482 container remove d0b950c3299684f92a0700ca723070b61b23d35eb2135b5fb27ab62aec73f7be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 09:51:14 compute-0 systemd[1]: libpod-conmon-d0b950c3299684f92a0700ca723070b61b23d35eb2135b5fb27ab62aec73f7be.scope: Deactivated successfully.
Dec 05 09:51:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:14 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c002f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:14 compute-0 podman[107913]: 2025-12-05 09:51:14.831095687 +0000 UTC m=+0.049425382 container create bf188fa7caec9df02c5a7b3d57396b34d38e605f635f84f9844dd1176e930dae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_payne, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:51:14 compute-0 podman[107913]: 2025-12-05 09:51:14.804679897 +0000 UTC m=+0.023009612 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:51:14 compute-0 systemd[1]: Started libpod-conmon-bf188fa7caec9df02c5a7b3d57396b34d38e605f635f84f9844dd1176e930dae.scope.
Dec 05 09:51:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6f7d7c19aa83e625fcf119796c0898be2f4d1d4d4ce71c37139973e2c56189/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6f7d7c19aa83e625fcf119796c0898be2f4d1d4d4ce71c37139973e2c56189/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6f7d7c19aa83e625fcf119796c0898be2f4d1d4d4ce71c37139973e2c56189/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6f7d7c19aa83e625fcf119796c0898be2f4d1d4d4ce71c37139973e2c56189/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:14 compute-0 podman[107913]: 2025-12-05 09:51:14.980264365 +0000 UTC m=+0.198594060 container init bf188fa7caec9df02c5a7b3d57396b34d38e605f635f84f9844dd1176e930dae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:51:14 compute-0 podman[107913]: 2025-12-05 09:51:14.987901157 +0000 UTC m=+0.206230852 container start bf188fa7caec9df02c5a7b3d57396b34d38e605f635f84f9844dd1176e930dae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_payne, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 09:51:15 compute-0 podman[107913]: 2025-12-05 09:51:15.009285154 +0000 UTC m=+0.227614869 container attach bf188fa7caec9df02c5a7b3d57396b34d38e605f635f84f9844dd1176e930dae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_payne, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 09:51:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:15 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf9c00a6a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:15 compute-0 keen_payne[107929]: {
Dec 05 09:51:15 compute-0 keen_payne[107929]:     "1": [
Dec 05 09:51:15 compute-0 keen_payne[107929]:         {
Dec 05 09:51:15 compute-0 keen_payne[107929]:             "devices": [
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "/dev/loop3"
Dec 05 09:51:15 compute-0 keen_payne[107929]:             ],
Dec 05 09:51:15 compute-0 keen_payne[107929]:             "lv_name": "ceph_lv0",
Dec 05 09:51:15 compute-0 keen_payne[107929]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:51:15 compute-0 keen_payne[107929]:             "lv_size": "21470642176",
Dec 05 09:51:15 compute-0 keen_payne[107929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:51:15 compute-0 keen_payne[107929]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:51:15 compute-0 keen_payne[107929]:             "name": "ceph_lv0",
Dec 05 09:51:15 compute-0 keen_payne[107929]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:51:15 compute-0 keen_payne[107929]:             "tags": {
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.cluster_name": "ceph",
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.crush_device_class": "",
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.encrypted": "0",
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.osd_id": "1",
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.type": "block",
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.vdo": "0",
Dec 05 09:51:15 compute-0 keen_payne[107929]:                 "ceph.with_tpm": "0"
Dec 05 09:51:15 compute-0 keen_payne[107929]:             },
Dec 05 09:51:15 compute-0 keen_payne[107929]:             "type": "block",
Dec 05 09:51:15 compute-0 keen_payne[107929]:             "vg_name": "ceph_vg0"
Dec 05 09:51:15 compute-0 keen_payne[107929]:         }
Dec 05 09:51:15 compute-0 keen_payne[107929]:     ]
Dec 05 09:51:15 compute-0 keen_payne[107929]: }
Dec 05 09:51:15 compute-0 systemd[1]: libpod-bf188fa7caec9df02c5a7b3d57396b34d38e605f635f84f9844dd1176e930dae.scope: Deactivated successfully.
Dec 05 09:51:15 compute-0 podman[107913]: 2025-12-05 09:51:15.284989088 +0000 UTC m=+0.503318773 container died bf188fa7caec9df02c5a7b3d57396b34d38e605f635f84f9844dd1176e930dae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:51:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:15.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d6f7d7c19aa83e625fcf119796c0898be2f4d1d4d4ce71c37139973e2c56189-merged.mount: Deactivated successfully.
Dec 05 09:51:15 compute-0 podman[107913]: 2025-12-05 09:51:15.470342155 +0000 UTC m=+0.688671840 container remove bf188fa7caec9df02c5a7b3d57396b34d38e605f635f84f9844dd1176e930dae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_payne, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:51:15 compute-0 systemd[1]: libpod-conmon-bf188fa7caec9df02c5a7b3d57396b34d38e605f635f84f9844dd1176e930dae.scope: Deactivated successfully.
Dec 05 09:51:15 compute-0 sudo[107805]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:15 compute-0 sudo[107950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:51:15 compute-0 sudo[107950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:51:15 compute-0 sudo[107950]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:15 compute-0 sudo[107975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:51:15 compute-0 sudo[107975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:51:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:51:15] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Dec 05 09:51:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:51:15] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Dec 05 09:51:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:15.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v48: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 286 B/s rd, 0 op/s
Dec 05 09:51:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:16 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf70003bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:16 compute-0 podman[108040]: 2025-12-05 09:51:16.086401608 +0000 UTC m=+0.025633431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:51:16 compute-0 podman[108040]: 2025-12-05 09:51:16.189346459 +0000 UTC m=+0.128578252 container create 630c89c4fb846c03bc0097b5fab6a8aeacbd0a530587027e0e9708f377b2e898 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_easley, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:51:16 compute-0 systemd[1]: Started libpod-conmon-630c89c4fb846c03bc0097b5fab6a8aeacbd0a530587027e0e9708f377b2e898.scope.
Dec 05 09:51:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:51:16 compute-0 podman[108040]: 2025-12-05 09:51:16.436425613 +0000 UTC m=+0.375657466 container init 630c89c4fb846c03bc0097b5fab6a8aeacbd0a530587027e0e9708f377b2e898 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_easley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 05 09:51:16 compute-0 podman[108040]: 2025-12-05 09:51:16.456946128 +0000 UTC m=+0.396177921 container start 630c89c4fb846c03bc0097b5fab6a8aeacbd0a530587027e0e9708f377b2e898 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_easley, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 05 09:51:16 compute-0 kind_easley[108057]: 167 167
Dec 05 09:51:16 compute-0 systemd[1]: libpod-630c89c4fb846c03bc0097b5fab6a8aeacbd0a530587027e0e9708f377b2e898.scope: Deactivated successfully.
Dec 05 09:51:16 compute-0 podman[108040]: 2025-12-05 09:51:16.608352614 +0000 UTC m=+0.547584407 container attach 630c89c4fb846c03bc0097b5fab6a8aeacbd0a530587027e0e9708f377b2e898 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_easley, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:51:16 compute-0 podman[108040]: 2025-12-05 09:51:16.609380631 +0000 UTC m=+0.548612454 container died 630c89c4fb846c03bc0097b5fab6a8aeacbd0a530587027e0e9708f377b2e898 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_easley, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 09:51:16 compute-0 ceph-mon[74418]: pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 381 B/s rd, 0 op/s
Dec 05 09:51:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-198f7367cb96cde4ba66b9c30ac047eb8b726b4daab3668f03fb44d7df82a68d-merged.mount: Deactivated successfully.
Dec 05 09:51:16 compute-0 podman[108040]: 2025-12-05 09:51:16.709520688 +0000 UTC m=+0.648752491 container remove 630c89c4fb846c03bc0097b5fab6a8aeacbd0a530587027e0e9708f377b2e898 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_easley, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 09:51:16 compute-0 systemd[1]: libpod-conmon-630c89c4fb846c03bc0097b5fab6a8aeacbd0a530587027e0e9708f377b2e898.scope: Deactivated successfully.
Dec 05 09:51:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:16 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf64003a50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:16 compute-0 podman[108081]: 2025-12-05 09:51:16.848402582 +0000 UTC m=+0.029891103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:51:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:51:16.953Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:51:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:51:16.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:51:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:51:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[97136]: 05/12/2025 09:51:17 : epoch 6932aa7f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf6c002f60 fd 49 proxy ignored for local
Dec 05 09:51:17 compute-0 kernel: ganesha.nfsd[105544]: segfault at 50 ip 00007fb044cdf32e sp 00007fb00affc210 error 4 in libntirpc.so.5.8[7fb044cc4000+2c000] likely on CPU 1 (core 0, socket 1)
Dec 05 09:51:17 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 05 09:51:17 compute-0 podman[108081]: 2025-12-05 09:51:17.192781697 +0000 UTC m=+0.374270228 container create 27429cf03037b46b661036c75740cfe464732a2be368725143ad584759aa7589 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_newton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:51:17 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Dec 05 09:51:17 compute-0 systemd[1]: Started Process Core Dump (PID 108097/UID 0).
Dec 05 09:51:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:17.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:17 compute-0 systemd[1]: Started libpod-conmon-27429cf03037b46b661036c75740cfe464732a2be368725143ad584759aa7589.scope.
Dec 05 09:51:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31fa239824a856851bc7fffb4c14b486c4374dee91d67f6d07110e1110f0373/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31fa239824a856851bc7fffb4c14b486c4374dee91d67f6d07110e1110f0373/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31fa239824a856851bc7fffb4c14b486c4374dee91d67f6d07110e1110f0373/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31fa239824a856851bc7fffb4c14b486c4374dee91d67f6d07110e1110f0373/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:17 compute-0 podman[108081]: 2025-12-05 09:51:17.572354567 +0000 UTC m=+0.753843088 container init 27429cf03037b46b661036c75740cfe464732a2be368725143ad584759aa7589 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_newton, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 09:51:17 compute-0 podman[108081]: 2025-12-05 09:51:17.586762759 +0000 UTC m=+0.768251270 container start 27429cf03037b46b661036c75740cfe464732a2be368725143ad584759aa7589 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_newton, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 05 09:51:17 compute-0 podman[108081]: 2025-12-05 09:51:17.619110558 +0000 UTC m=+0.800599059 container attach 27429cf03037b46b661036c75740cfe464732a2be368725143ad584759aa7589 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_newton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 09:51:17 compute-0 ceph-mon[74418]: pgmap v48: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 286 B/s rd, 0 op/s
Dec 05 09:51:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:17.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 286 B/s rd, 0 op/s
Dec 05 09:51:18 compute-0 lvm[108178]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:51:18 compute-0 lvm[108178]: VG ceph_vg0 finished
Dec 05 09:51:18 compute-0 vibrant_newton[108101]: {}
Dec 05 09:51:18 compute-0 systemd[1]: libpod-27429cf03037b46b661036c75740cfe464732a2be368725143ad584759aa7589.scope: Deactivated successfully.
Dec 05 09:51:18 compute-0 systemd[1]: libpod-27429cf03037b46b661036c75740cfe464732a2be368725143ad584759aa7589.scope: Consumed 1.444s CPU time.
Dec 05 09:51:18 compute-0 podman[108081]: 2025-12-05 09:51:18.783557497 +0000 UTC m=+1.965046028 container died 27429cf03037b46b661036c75740cfe464732a2be368725143ad584759aa7589 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_newton, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:51:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:19.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:51:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:19.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:51:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 190 B/s rd, 0 op/s
Dec 05 09:51:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:21.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095121 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:51:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:21.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v51: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 190 B/s rd, 0 op/s
Dec 05 09:51:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:51:23 compute-0 ceph-mon[74418]: pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 286 B/s rd, 0 op/s
Dec 05 09:51:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:23.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:23 compute-0 systemd-coredump[108098]: Process 97140 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 65:
                                                    #0  0x00007fb044cdf32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 05 09:51:23 compute-0 systemd[1]: systemd-coredump@0-108097-0.service: Deactivated successfully.
Dec 05 09:51:23 compute-0 systemd[1]: systemd-coredump@0-108097-0.service: Consumed 1.809s CPU time.
Dec 05 09:51:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a31fa239824a856851bc7fffb4c14b486c4374dee91d67f6d07110e1110f0373-merged.mount: Deactivated successfully.
Dec 05 09:51:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:23.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:23 compute-0 podman[108081]: 2025-12-05 09:51:23.976346943 +0000 UTC m=+7.157835444 container remove 27429cf03037b46b661036c75740cfe464732a2be368725143ad584759aa7589 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_newton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 05 09:51:23 compute-0 systemd[1]: libpod-conmon-27429cf03037b46b661036c75740cfe464732a2be368725143ad584759aa7589.scope: Deactivated successfully.
Dec 05 09:51:24 compute-0 sudo[107975]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:51:24 compute-0 podman[108199]: 2025-12-05 09:51:24.055644096 +0000 UTC m=+0.462167961 container died d1ea233284d0d310cc076ca9ad62473a1bc421943ae196b1f9584786262f3156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 05 09:51:24 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-88bfa4c763b8583ae6b894ae1e62989a631d6d04fe9261ab88f6f47e59639de7-merged.mount: Deactivated successfully.
Dec 05 09:51:24 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:24 compute-0 podman[108199]: 2025-12-05 09:51:24.133544843 +0000 UTC m=+0.540068638 container remove d1ea233284d0d310cc076ca9ad62473a1bc421943ae196b1f9584786262f3156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 09:51:24 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Main process exited, code=exited, status=139/n/a
Dec 05 09:51:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:51:24 compute-0 sudo[108212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:51:24 compute-0 sudo[108212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:51:24 compute-0 sudo[108212]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:24 compute-0 ceph-mon[74418]: pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 190 B/s rd, 0 op/s
Dec 05 09:51:24 compute-0 ceph-mon[74418]: pgmap v51: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 190 B/s rd, 0 op/s
Dec 05 09:51:24 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:24 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:51:24 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Failed with result 'exit-code'.
Dec 05 09:51:24 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 2.111s CPU time.
Dec 05 09:51:25 compute-0 ceph-mon[74418]: pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:51:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:25.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:25 compute-0 sudo[108266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:51:25 compute-0 sudo[108266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:51:25 compute-0 sudo[108266]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:51:25] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Dec 05 09:51:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:51:25] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Dec 05 09:51:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:25.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v53: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 05 09:51:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:51:26.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:51:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:51:27 compute-0 ceph-mon[74418]: pgmap v53: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 05 09:51:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:27.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:51:27
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.control', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', '.nfs', 'vms', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr']
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 09:51:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:51:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:51:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:51:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:27.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v54: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 05 09:51:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095128 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:51:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:51:28 compute-0 ceph-mgr[74711]: [dashboard INFO request] [192.168.122.100:40702] [POST] [200] [0.003s] [4.0B] [8834eb4c-55bc-4fb3-96b3-06e90f221cc8] /api/prometheus_receiver
Dec 05 09:51:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:29.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:29 compute-0 ceph-mon[74418]: pgmap v54: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 05 09:51:29 compute-0 sudo[104753]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:29.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:51:30 compute-0 ceph-mon[74418]: pgmap v55: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:51:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:31.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:31 compute-0 sudo[108446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdyplmoclzroxogcwlyuaqhywdnwfjjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928291.570821-369-58916009710624/AnsiballZ_command.py'
Dec 05 09:51:31 compute-0 sudo[108446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:31.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:32 compute-0 python3.9[108448]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:51:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:51:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:51:33 compute-0 sudo[108446]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:33 compute-0 ceph-mon[74418]: pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:51:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:33.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:33 compute-0 sudo[108735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-almgzdculxwfznhbrtqpoclcekakrush ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928293.2831252-393-96216468216942/AnsiballZ_selinux.py'
Dec 05 09:51:33 compute-0 sudo[108735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:33.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v57: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 09:51:34 compute-0 python3.9[108737]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 05 09:51:34 compute-0 sudo[108735]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:34 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Scheduled restart job, restart counter is at 1.
Dec 05 09:51:34 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:51:34 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 2.111s CPU time.
Dec 05 09:51:34 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:51:34 compute-0 podman[108814]: 2025-12-05 09:51:34.633008255 +0000 UTC m=+0.114361207 container create 8b0b7133412abf48eb3fde5627ee2a49adbe1515c69b84fd9c455adf79913da5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:51:34 compute-0 podman[108814]: 2025-12-05 09:51:34.550142574 +0000 UTC m=+0.031495576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43b1396a20c3093aa5507217a9fbe97d599b80698d76c52743ab1dcfdb6f81c8/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43b1396a20c3093aa5507217a9fbe97d599b80698d76c52743ab1dcfdb6f81c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43b1396a20c3093aa5507217a9fbe97d599b80698d76c52743ab1dcfdb6f81c8/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43b1396a20c3093aa5507217a9fbe97d599b80698d76c52743ab1dcfdb6f81c8/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hocvro-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:51:34 compute-0 podman[108814]: 2025-12-05 09:51:34.702694402 +0000 UTC m=+0.184047334 container init 8b0b7133412abf48eb3fde5627ee2a49adbe1515c69b84fd9c455adf79913da5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:51:34 compute-0 podman[108814]: 2025-12-05 09:51:34.707809178 +0000 UTC m=+0.189162100 container start 8b0b7133412abf48eb3fde5627ee2a49adbe1515c69b84fd9c455adf79913da5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:51:34 compute-0 bash[108814]: 8b0b7133412abf48eb3fde5627ee2a49adbe1515c69b84fd9c455adf79913da5
Dec 05 09:51:34 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:51:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:34 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 05 09:51:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:34 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 05 09:51:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:34 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 05 09:51:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:34 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 05 09:51:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:34 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 05 09:51:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:34 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 05 09:51:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:34 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 05 09:51:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:34 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:51:34 compute-0 sudo[108996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcbigqtnxuxurgwbsjqwyrzxemhzqhib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928294.6796482-426-265515772373227/AnsiballZ_command.py'
Dec 05 09:51:34 compute-0 sudo[108996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:35 compute-0 python3.9[108998]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 05 09:51:35 compute-0 sudo[108996]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:35 compute-0 ceph-mon[74418]: pgmap v57: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 09:51:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:35.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:35 compute-0 sudo[109148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dymluxvtqsdzlvffatukgguvseqcxcxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928295.3704152-450-184782592683122/AnsiballZ_file.py'
Dec 05 09:51:35 compute-0 sudo[109148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:51:35] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Dec 05 09:51:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:51:35] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Dec 05 09:51:35 compute-0 python3.9[109150]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:51:35 compute-0 sudo[109148]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:35.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 09:51:36 compute-0 sudo[109302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erdrneuyapyagfewflkjkwlpuvzyoodn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928296.0952315-474-56815154393651/AnsiballZ_mount.py'
Dec 05 09:51:36 compute-0 sudo[109302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:36 compute-0 python3.9[109304]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 05 09:51:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:51:36.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:51:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:51:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:37.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:37.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 09:51:38 compute-0 ceph-mon[74418]: pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 09:51:38 compute-0 sudo[109302]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:51:38.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:51:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:39.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:39 compute-0 ceph-mon[74418]: pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 09:51:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:39.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:39 compute-0 sudo[109456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isfujigsrdqrqfmdidqteakpiphqzeac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928299.7556698-558-169165717200543/AnsiballZ_file.py'
Dec 05 09:51:39 compute-0 sudo[109456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 09:51:40 compute-0 python3.9[109458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:51:40 compute-0 sudo[109456]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:40 compute-0 ceph-mon[74418]: pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 09:51:40 compute-0 sudo[109610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrkrobgitmrtbrpuwbczycxfbtxhtipq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928300.5425742-582-67562255899917/AnsiballZ_stat.py'
Dec 05 09:51:40 compute-0 sudo[109610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 05 09:51:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 05 09:51:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:51:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:51:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 09:51:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:51:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:51:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:51:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 09:51:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:51:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:51:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:51:40 compute-0 python3.9[109612]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:51:40 compute-0 sudo[109610]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:41 compute-0 sudo[109688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjtlpdeycygwlayqkyvaqvcofarzbgkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928300.5425742-582-67562255899917/AnsiballZ_file.py'
Dec 05 09:51:41 compute-0 sudo[109688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:41.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:41 compute-0 python3.9[109690]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:51:41 compute-0 sudo[109688]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:41.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v61: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:51:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:51:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:51:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:51:42 compute-0 sudo[109842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrhnrlormhvqwcxtgdrchcqewhzsaydz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928302.2972472-645-92744717276477/AnsiballZ_stat.py'
Dec 05 09:51:42 compute-0 ceph-mon[74418]: pgmap v61: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:51:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:51:42 compute-0 sudo[109842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:42 compute-0 python3.9[109844]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000007:nfs.cephfs.2: -2
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:51:42 compute-0 sudo[109842]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 05 09:51:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 09:51:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:43 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2e0000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:43.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095143 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 09:51:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:43.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:43 compute-0 sudo[110012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooqugvkcwrhqliygmbpudthmxetbtnuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928303.46992-684-188406446989248/AnsiballZ_getent.py'
Dec 05 09:51:43 compute-0 sudo[110012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:44 compute-0 python3.9[110014]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 05 09:51:44 compute-0 sudo[110012]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Dec 05 09:51:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:44 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:44 compute-0 sudo[110167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhjihzvmlowqowhzrvwkvqwrwiahpjli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928304.4530497-714-64998044538337/AnsiballZ_getent.py'
Dec 05 09:51:44 compute-0 sudo[110167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:44 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:44 compute-0 python3.9[110169]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 05 09:51:44 compute-0 sudo[110167]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:45 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:45 compute-0 ceph-mon[74418]: pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Dec 05 09:51:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:45.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:51:45] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Dec 05 09:51:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:51:45] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Dec 05 09:51:45 compute-0 sudo[110295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:51:45 compute-0 sudo[110343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrtejjnnifjzfcbfvgehczawooclqtsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928305.179744-738-172134706982924/AnsiballZ_group.py'
Dec 05 09:51:45 compute-0 sudo[110295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:51:45 compute-0 sudo[110343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:45 compute-0 sudo[110295]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:45 compute-0 python3.9[110347]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
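The ansible.builtin.group task above requests a hugetlbfs group with gid 42477 in state=present. A rough shell sketch of the same idempotent behaviour:

    # create the group only if it does not already exist (state=present)
    getent group hugetlbfs >/dev/null || groupadd -g 42477 hugetlbfs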
Dec 05 09:51:45 compute-0 sudo[110343]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:45.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 05 09:51:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095146 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 09:51:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:46 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:46 compute-0 sudo[110499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whdvvjsmgcrtlvesfujaqpoffoporbag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928306.1843193-765-163079412557641/AnsiballZ_file.py'
Dec 05 09:51:46 compute-0 sudo[110499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:46 compute-0 python3.9[110501]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
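The ansible.builtin.file task above creates /var/lib/vhost_sockets owned by qemu, mode 0755, with SELinux user system_u and type virt_cache_t. A hand-rolled approximation (using chcon as a shortcut for the module's SELinux handling) would be:

    mkdir -p /var/lib/vhost_sockets
    chown qemu:qemu /var/lib/vhost_sockets
    chmod 0755 /var/lib/vhost_sockets
    # apply the SELinux context requested by the module (seuser/setype)
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets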
Dec 05 09:51:46 compute-0 sudo[110499]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:46 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:51:46.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:51:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:51:46.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
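The recurring alertmanager dispatch errors show both ceph-dashboard webhook receivers timing out. A quick reachability check from this node against the URLs quoted in the error, assuming curl is available, is simply:

    # a connect timeout here reproduces the "dial tcp ... i/o timeout" above
    curl -sv --max-time 5 http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver
    curl -sv --max-time 5 http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver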
Dec 05 09:51:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:51:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:47 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:47.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:47 compute-0 sudo[110651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmveoncohdyzzwimmfadegeyfgqnmsex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928307.2504377-798-80097722285539/AnsiballZ_dnf.py'
Dec 05 09:51:47 compute-0 sudo[110651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:47.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:47 compute-0 ceph-mon[74418]: pgmap v63: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 05 09:51:48 compute-0 python3.9[110653]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
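The ansible.legacy.dnf call above removes dracut-config-generic (state=absent). The equivalent one-liner on the host would be roughly:

    dnf -y remove dracut-config-generic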
Dec 05 09:51:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 05 09:51:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:48 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:48 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:51:48.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:51:48 compute-0 ceph-mon[74418]: pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 05 09:51:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:49 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:49.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:49 compute-0 sudo[110651]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:51:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:49.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:51:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 05 09:51:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:50 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:50 compute-0 sudo[110807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwprjxcsyhmbzpaajpsgwdrceeyubrmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928309.9105954-822-37836012315300/AnsiballZ_file.py'
Dec 05 09:51:50 compute-0 sudo[110807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:50 compute-0 python3.9[110809]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:51:50 compute-0 sudo[110807]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:50 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:50 compute-0 sudo[110960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltxcdeatybjupsaaxtatslxjfolzfdhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928310.6738763-846-138738386897382/AnsiballZ_stat.py'
Dec 05 09:51:50 compute-0 sudo[110960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:51 compute-0 python3.9[110962]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:51:51 compute-0 sudo[110960]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:51 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:51.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:51 compute-0 ceph-mon[74418]: pgmap v65: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 05 09:51:51 compute-0 sudo[111038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tknzwpiktrggzpqrzczvixzdzstfsknw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928310.6738763-846-138738386897382/AnsiballZ_file.py'
Dec 05 09:51:51 compute-0 sudo[111038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:51 compute-0 python3.9[111040]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:51:51 compute-0 sudo[111038]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:51:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:51:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Dec 05 09:51:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:52 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:51:52 compute-0 sudo[111192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pykozgipojbcexpvwupurjwmpmykbslf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928312.0347514-885-37621237380472/AnsiballZ_stat.py'
Dec 05 09:51:52 compute-0 sudo[111192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:52 compute-0 python3.9[111194]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:51:52 compute-0 sudo[111192]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:52 compute-0 ceph-mon[74418]: pgmap v66: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Dec 05 09:51:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:52 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:52 compute-0 sudo[111270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgsxnwduvugnxfnmrsetszwfhangwuxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928312.0347514-885-37621237380472/AnsiballZ_file.py'
Dec 05 09:51:52 compute-0 sudo[111270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:53 compute-0 python3.9[111272]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:51:53 compute-0 sudo[111270]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:53 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:53.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:53 compute-0 sudo[111422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khzbkvwsqyfhvivrbmrijiiswaysrlbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928313.6446319-930-193042959235652/AnsiballZ_dnf.py'
Dec 05 09:51:53 compute-0 sudo[111422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:53.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:54 compute-0 python3.9[111424]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
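Here the dnf module installs the packages needed for the tuned cpu-partitioning profile (state=present). By hand this would be approximately:

    dnf -y install tuned tuned-profiles-cpu-partitioning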
Dec 05 09:51:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 767 B/s wr, 3 op/s
Dec 05 09:51:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:54 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:54 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:55 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:55 compute-0 ceph-mon[74418]: pgmap v67: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 767 B/s wr, 3 op/s
Dec 05 09:51:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:51:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:55.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:51:55 compute-0 sudo[111422]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:51:55] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Dec 05 09:51:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:51:55] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Dec 05 09:51:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:55.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:51:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:56 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:56 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b80032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:51:56.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:51:57 compute-0 python3.9[111579]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:51:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:51:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095157 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:51:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:57 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:57 compute-0 ceph-mon[74418]: pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:51:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:57.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:51:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
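The audit entry above records the mgr asking the monitor for the OSD blocklist. The same query can be issued from the Ceph CLI with something like:

    # list blocklisted client addresses in JSON, as in the dispatched command
    ceph osd blocklist ls --format json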
Dec 05 09:51:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:51:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f6850821ca0>)]
Dec 05 09:51:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 05 09:51:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:51:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:51:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:51:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f6850821d30>)]
Dec 05 09:51:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 05 09:51:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:57.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:57 compute-0 python3.9[111731]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
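The stat and slurp calls around /etc/tuned/active_profile read back which tuned profile is currently applied; the manual equivalent is just:

    cat /etc/tuned/active_profile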
Dec 05 09:51:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:51:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:58 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:51:58 compute-0 python3.9[111883]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:51:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:58 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:51:58.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:51:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:51:59 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b80032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:51:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:51:59.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:51:59 compute-0 ceph-mon[74418]: pgmap v69: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:51:59 compute-0 sudo[112033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqolmyyqsaypncgfkdpszkhyertsaoub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928319.2372873-1053-92575523660185/AnsiballZ_systemd.py'
Dec 05 09:51:59 compute-0 sudo[112033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:51:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:51:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:51:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:51:59.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
Dec 05 09:52:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:00 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:00 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:01 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:01 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.hvnxai(active, since 93s), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.291763) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928321291943, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2199, "num_deletes": 251, "total_data_size": 6660773, "memory_usage": 6807664, "flush_reason": "Manual Compaction"}
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 05 09:52:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:01.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928321361085, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6210706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8771, "largest_seqno": 10969, "table_properties": {"data_size": 6199928, "index_size": 6952, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23054, "raw_average_key_size": 21, "raw_value_size": 6177860, "raw_average_value_size": 5641, "num_data_blocks": 305, "num_entries": 1095, "num_filter_entries": 1095, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928199, "oldest_key_time": 1764928199, "file_creation_time": 1764928321, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 69393 microseconds, and 15290 cpu microseconds.
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:52:01 compute-0 ceph-mon[74418]: pgmap v70: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.361172) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6210706 bytes OK
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.361251) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.362793) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.362814) EVENT_LOG_v1 {"time_micros": 1764928321362808, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.362835) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 6650865, prev total WAL file size 6652144, number of live WAL files 2.
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.364492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(6065KB)], [23(11MB)]
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928321364615, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 18361486, "oldest_snapshot_seqno": -1}
Dec 05 09:52:01 compute-0 python3.9[112035]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4165 keys, 14485280 bytes, temperature: kUnknown
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928321606096, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14485280, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14451437, "index_size": 22341, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 106403, "raw_average_key_size": 25, "raw_value_size": 14369222, "raw_average_value_size": 3449, "num_data_blocks": 958, "num_entries": 4165, "num_filter_entries": 4165, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764928321, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.606408) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14485280 bytes
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.613605) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 76.0 rd, 60.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.9, 11.6 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(5.3) write-amplify(2.3) OK, records in: 4701, records dropped: 536 output_compression: NoCompression
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.613641) EVENT_LOG_v1 {"time_micros": 1764928321613628, "job": 8, "event": "compaction_finished", "compaction_time_micros": 241553, "compaction_time_cpu_micros": 47174, "output_level": 6, "num_output_files": 1, "total_output_size": 14485280, "num_input_records": 4701, "num_output_records": 4165, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928321614685, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928321616857, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.364360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.616912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.616917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.616918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.616919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:52:01 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:52:01.616921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:52:01 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 05 09:52:01 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 05 09:52:01 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 05 09:52:01 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 05 09:52:01 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 05 09:52:01 compute-0 sudo[112033]: pam_unix(sudo:session): session closed for user root
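The ansible.builtin.systemd call invoked under sudo[112033] (enabled=True, state=restarted for tuned) is what produced the stop/start of the Dynamic System Tuning Daemon above. Done by hand it would correspond roughly to:

    systemctl enable tuned
    systemctl restart tuned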
Dec 05 09:52:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:01.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
Dec 05 09:52:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:02 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b80032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:52:02 compute-0 ceph-mon[74418]: mgrmap e30: compute-0.hvnxai(active, since 93s), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 09:52:02 compute-0 python3.9[112200]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 05 09:52:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:02 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:03 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:03.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:03 compute-0 ceph-mon[74418]: pgmap v71: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
Dec 05 09:52:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:52:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:03.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:52:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 341 B/s wr, 0 op/s
Dec 05 09:52:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:04 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:04 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:05 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:52:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:05 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:05.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:05 compute-0 ceph-mon[74418]: pgmap v72: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 341 B/s wr, 0 op/s
Dec 05 09:52:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:52:05] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Dec 05 09:52:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:52:05] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Dec 05 09:52:05 compute-0 sudo[112227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:52:05 compute-0 sudo[112227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:05 compute-0 sudo[112227]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:05.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v73: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 341 B/s wr, 0 op/s
Dec 05 09:52:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:06 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:06 compute-0 sudo[112379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqhzgzpqriowztngsgwhczyrewlefmjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928326.067008-1224-218466250095644/AnsiballZ_systemd.py'
Dec 05 09:52:06 compute-0 sudo[112379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:06 compute-0 python3.9[112381]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:52:06 compute-0 sudo[112379]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:06 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:06.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:52:07 compute-0 sudo[112533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knxrfurvetyddnycsnmgnsicwzlvsccz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928326.8436844-1224-5893931398504/AnsiballZ_systemd.py'
Dec 05 09:52:07 compute-0 sudo[112533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:52:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:07 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:07.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:07 compute-0 python3.9[112535]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:52:07 compute-0 sudo[112533]: pam_unix(sudo:session): session closed for user root
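The two ansible.builtin.systemd tasks above stop and disable kernel same-page merging (ksm.service and ksmtuned.service). A shell approximation of both tasks at once:

    # stop the units now and prevent them from starting at boot
    systemctl disable --now ksm.service ksmtuned.service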
Dec 05 09:52:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095207 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:52:07 compute-0 ceph-mon[74418]: pgmap v73: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 341 B/s wr, 0 op/s
Dec 05 09:52:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:52:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:07.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:52:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 341 B/s wr, 0 op/s
Dec 05 09:52:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:08 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:08 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:52:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:08 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:52:08 compute-0 sshd-session[100770]: Connection closed by 192.168.122.30 port 60574
Dec 05 09:52:08 compute-0 sshd-session[100730]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:52:08 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Dec 05 09:52:08 compute-0 systemd[1]: session-38.scope: Consumed 1min 8.019s CPU time.
Dec 05 09:52:08 compute-0 systemd-logind[789]: Session 38 logged out. Waiting for processes to exit.
Dec 05 09:52:08 compute-0 systemd-logind[789]: Removed session 38.
Dec 05 09:52:08 compute-0 ceph-mon[74418]: pgmap v74: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 341 B/s wr, 0 op/s
Dec 05 09:52:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:08 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:08.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:52:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:09 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:09.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:09.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v75: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Dec 05 09:52:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:10 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:10 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:11 compute-0 ceph-mon[74418]: pgmap v75: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Dec 05 09:52:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:11 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:11.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:11.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v76: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 09:52:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:12 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:52:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:52:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:52:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:12 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:52:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:12 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:52:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:12 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:13 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:13 compute-0 ceph-mon[74418]: pgmap v76: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 09:52:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:52:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:13.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:13.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Dec 05 09:52:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:14 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:14 compute-0 sshd-session[112570]: Accepted publickey for zuul from 192.168.122.30 port 41534 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:52:14 compute-0 systemd-logind[789]: New session 40 of user zuul.
Dec 05 09:52:14 compute-0 systemd[1]: Started Session 40 of User zuul.
Dec 05 09:52:14 compute-0 sshd-session[112570]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:52:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:14 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:15 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:15.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:52:15] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Dec 05 09:52:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:52:15] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Dec 05 09:52:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:15.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:16 compute-0 python3.9[112725]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:52:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v78: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 09:52:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:16 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:16 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:16.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:52:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:17 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:17 compute-0 sudo[112881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwzrfjdcphsqqibansdfpnlsbyfocsqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928336.9076726-68-254468604006373/AnsiballZ_getent.py'
Dec 05 09:52:17 compute-0 sudo[112881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:52:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:17.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:17 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 09:52:17 compute-0 ceph-mon[74418]: pgmap v77: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Dec 05 09:52:17 compute-0 ceph-mon[74418]: pgmap v78: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 09:52:17 compute-0 python3.9[112883]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 05 09:52:17 compute-0 sudo[112881]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:17.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 09:52:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:18 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:18 compute-0 sudo[113036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trrvvgfgnvculcirxbutibmreflszfnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928338.002212-104-191624384522393/AnsiballZ_setup.py'
Dec 05 09:52:18 compute-0 sudo[113036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:18 compute-0 python3.9[113038]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:52:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:18 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:52:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:18 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:18.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:52:18 compute-0 sudo[113036]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:19 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:19.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:19 compute-0 ceph-mon[74418]: pgmap v79: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 09:52:19 compute-0 sudo[113120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvpzecukiianeathksnigqmqtmhglyzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928338.002212-104-191624384522393/AnsiballZ_dnf.py'
Dec 05 09:52:19 compute-0 sudo[113120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:19 compute-0 python3.9[113122]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 09:52:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:52:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:19.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:52:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:52:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:20 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:20 compute-0 ceph-mon[74418]: pgmap v80: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:52:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:20 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095221 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 09:52:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:21 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:21.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:21 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:52:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:21 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:52:21 compute-0 sudo[113120]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:21.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v81: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 09:52:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:22 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:52:22 compute-0 sudo[113277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqhqkornmiyvwjtfuktyjvsnrbhbhwde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928342.2682638-146-157179732033318/AnsiballZ_dnf.py'
Dec 05 09:52:22 compute-0 sudo[113277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:22 compute-0 python3.9[113279]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:52:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:22 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:23 compute-0 ceph-mon[74418]: pgmap v81: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 09:52:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:23 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:23.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:23.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v82: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 05 09:52:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:24 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:24 compute-0 sudo[113277]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:24 compute-0 sudo[113307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:52:24 compute-0 sudo[113307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:24 compute-0 sudo[113307]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:24 compute-0 sudo[113332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 09:52:24 compute-0 sudo[113332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:24 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 09:52:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:24 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:25 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:25 compute-0 ceph-mon[74418]: pgmap v82: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 05 09:52:25 compute-0 podman[113482]: 2025-12-05 09:52:25.30941618 +0000 UTC m=+0.342388546 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:52:25 compute-0 sudo[113575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azzhpevwuqcejrgrzgbjlqtmldkbseao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928344.684394-170-182309590135550/AnsiballZ_systemd.py'
Dec 05 09:52:25 compute-0 sudo[113575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:25.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:25 compute-0 podman[113482]: 2025-12-05 09:52:25.439739384 +0000 UTC m=+0.472711730 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 05 09:52:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:52:25] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Dec 05 09:52:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:52:25] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Dec 05 09:52:25 compute-0 python3.9[113577]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:52:25 compute-0 sudo[113575]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:25 compute-0 sudo[113660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:52:25 compute-0 sudo[113660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:25 compute-0 sudo[113660]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:25 compute-0 podman[113728]: 2025-12-05 09:52:25.920399505 +0000 UTC m=+0.053878112 container exec 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:52:25 compute-0 podman[113728]: 2025-12-05 09:52:25.933613378 +0000 UTC m=+0.067091955 container exec_died 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:52:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 09:52:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:26 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:26 compute-0 podman[113873]: 2025-12-05 09:52:26.267878264 +0000 UTC m=+0.053217073 container exec 8b0b7133412abf48eb3fde5627ee2a49adbe1515c69b84fd9c455adf79913da5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 09:52:26 compute-0 podman[113873]: 2025-12-05 09:52:26.282754742 +0000 UTC m=+0.068093521 container exec_died 8b0b7133412abf48eb3fde5627ee2a49adbe1515c69b84fd9c455adf79913da5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 05 09:52:26 compute-0 podman[114009]: 2025-12-05 09:52:26.488285487 +0000 UTC m=+0.055797742 container exec d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 09:52:26 compute-0 podman[114009]: 2025-12-05 09:52:26.513309385 +0000 UTC m=+0.080821620 container exec_died d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 09:52:26 compute-0 python3.9[113996]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:52:26 compute-0 podman[114071]: 2025-12-05 09:52:26.714085314 +0000 UTC m=+0.054911059 container exec f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., version=2.2.4, build-date=2023-02-22T09:23:20)
Dec 05 09:52:26 compute-0 podman[114071]: 2025-12-05 09:52:26.734657293 +0000 UTC m=+0.075482978 container exec_died f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, vendor=Red Hat, Inc., description=keepalived for Ceph, architecture=x86_64, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec 05 09:52:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:26 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:26 compute-0 podman[114163]: 2025-12-05 09:52:26.932842512 +0000 UTC m=+0.044930342 container exec a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:52:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:26.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:52:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:26.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:52:26 compute-0 podman[114163]: 2025-12-05 09:52:26.968586398 +0000 UTC m=+0.080674228 container exec_died a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:52:27 compute-0 podman[114289]: 2025-12-05 09:52:27.219033323 +0000 UTC m=+0.047944493 container exec 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:52:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:27 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:27 compute-0 ceph-mon[74418]: pgmap v83: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 09:52:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:52:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:27.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:52:27
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'backups', 'cephfs.cephfs.meta', 'vms', '.nfs', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'volumes']
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 09:52:27 compute-0 podman[114289]: 2025-12-05 09:52:27.461144637 +0000 UTC m=+0.290055887 container exec_died 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 09:52:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:52:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:52:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095227 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:52:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:52:27 compute-0 sudo[114477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvgptnpothmlfwiqeuwniuzotlkxhjpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928347.0839076-224-193296731529410/AnsiballZ_sefcontext.py'
Dec 05 09:52:27 compute-0 sudo[114477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:27 compute-0 podman[114472]: 2025-12-05 09:52:27.822811486 +0000 UTC m=+0.053101601 container exec 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:52:27 compute-0 podman[114472]: 2025-12-05 09:52:27.873614364 +0000 UTC m=+0.103904449 container exec_died 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:52:27 compute-0 sudo[113332]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:52:27 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:52:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:52:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:27.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:28 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:52:28 compute-0 sudo[114521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:52:28 compute-0 sudo[114521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:28 compute-0 sudo[114521]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:28 compute-0 sudo[114548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:52:28 compute-0 sudo[114548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:28 compute-0 python3.9[114487]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 05 09:52:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v84: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 09:52:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:28 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:28 compute-0 sudo[114477]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:52:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:52:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:52:28 compute-0 sudo[114548]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:52:28 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:52:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:52:28 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:52:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 05 09:52:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:52:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:28 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:28.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:52:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:29 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:29 compute-0 python3.9[114756]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:52:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:52:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:52:29 compute-0 ceph-mon[74418]: pgmap v84: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 09:52:29 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:52:29 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:52:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:29.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:52:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:52:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:52:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:52:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:52:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:52:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:52:29 compute-0 sudo[114761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:52:29 compute-0 sudo[114761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:29 compute-0 sudo[114761]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:29 compute-0 sudo[114786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:52:29 compute-0 sudo[114786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:29 compute-0 podman[114926]: 2025-12-05 09:52:29.947488289 +0000 UTC m=+0.045940969 container create 7db006aa958d54087fcc1e0dcd8e03ee0bbbc26fd7d82714db3f7893fa04ecc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_booth, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 05 09:52:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:29 compute-0 systemd[1]: Started libpod-conmon-7db006aa958d54087fcc1e0dcd8e03ee0bbbc26fd7d82714db3f7893fa04ecc6.scope.
Dec 05 09:52:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:29.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:52:30 compute-0 podman[114926]: 2025-12-05 09:52:29.92916995 +0000 UTC m=+0.027622650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:52:30 compute-0 podman[114926]: 2025-12-05 09:52:30.036463538 +0000 UTC m=+0.134916248 container init 7db006aa958d54087fcc1e0dcd8e03ee0bbbc26fd7d82714db3f7893fa04ecc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_booth, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 09:52:30 compute-0 podman[114926]: 2025-12-05 09:52:30.051702346 +0000 UTC m=+0.150155026 container start 7db006aa958d54087fcc1e0dcd8e03ee0bbbc26fd7d82714db3f7893fa04ecc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_booth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 09:52:30 compute-0 podman[114926]: 2025-12-05 09:52:30.055467766 +0000 UTC m=+0.153920456 container attach 7db006aa958d54087fcc1e0dcd8e03ee0bbbc26fd7d82714db3f7893fa04ecc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_booth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:52:30 compute-0 sharp_booth[114975]: 167 167
Dec 05 09:52:30 compute-0 systemd[1]: libpod-7db006aa958d54087fcc1e0dcd8e03ee0bbbc26fd7d82714db3f7893fa04ecc6.scope: Deactivated successfully.
Dec 05 09:52:30 compute-0 podman[114926]: 2025-12-05 09:52:30.059010881 +0000 UTC m=+0.157463561 container died 7db006aa958d54087fcc1e0dcd8e03ee0bbbc26fd7d82714db3f7893fa04ecc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_booth, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-14ff644f0b74f56f65f82d260c3bc299bbc7487a7fcf89e0d218698bd9c8a8af-merged.mount: Deactivated successfully.
Dec 05 09:52:30 compute-0 sudo[115033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esnqennvaqvhjqfjmykrccoragdbryrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928349.8500347-278-14061393916119/AnsiballZ_dnf.py'
Dec 05 09:52:30 compute-0 sudo[115033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:30 compute-0 podman[114926]: 2025-12-05 09:52:30.10910634 +0000 UTC m=+0.207559020 container remove 7db006aa958d54087fcc1e0dcd8e03ee0bbbc26fd7d82714db3f7893fa04ecc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_booth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:52:30 compute-0 systemd[1]: libpod-conmon-7db006aa958d54087fcc1e0dcd8e03ee0bbbc26fd7d82714db3f7893fa04ecc6.scope: Deactivated successfully.
Dec 05 09:52:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:30 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=infra.usagestats t=2025-12-05T09:52:30.207121201Z level=info msg="Usage stats are ready to report"
Dec 05 09:52:30 compute-0 podman[115046]: 2025-12-05 09:52:30.250294925 +0000 UTC m=+0.040156655 container create edb7f738048bb166adafe510e0e0f3100e5295eb8df6964d3512f49b8f90e0cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_payne, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 05 09:52:30 compute-0 systemd[1]: Started libpod-conmon-edb7f738048bb166adafe510e0e0f3100e5295eb8df6964d3512f49b8f90e0cd.scope.
Dec 05 09:52:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:52:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a277dc820d3dbbb2c51bbd53db06e924b3c08874badc79496d421e2c81c5f510/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a277dc820d3dbbb2c51bbd53db06e924b3c08874badc79496d421e2c81c5f510/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a277dc820d3dbbb2c51bbd53db06e924b3c08874badc79496d421e2c81c5f510/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:30 compute-0 podman[115046]: 2025-12-05 09:52:30.231609066 +0000 UTC m=+0.021470826 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:52:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a277dc820d3dbbb2c51bbd53db06e924b3c08874badc79496d421e2c81c5f510/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:30 compute-0 python3.9[115038]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:52:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a277dc820d3dbbb2c51bbd53db06e924b3c08874badc79496d421e2c81c5f510/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:30 compute-0 podman[115046]: 2025-12-05 09:52:30.349164979 +0000 UTC m=+0.139026719 container init edb7f738048bb166adafe510e0e0f3100e5295eb8df6964d3512f49b8f90e0cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_payne, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:52:30 compute-0 podman[115046]: 2025-12-05 09:52:30.355881648 +0000 UTC m=+0.145743378 container start edb7f738048bb166adafe510e0e0f3100e5295eb8df6964d3512f49b8f90e0cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_payne, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Dec 05 09:52:30 compute-0 podman[115046]: 2025-12-05 09:52:30.359170915 +0000 UTC m=+0.149032675 container attach edb7f738048bb166adafe510e0e0f3100e5295eb8df6964d3512f49b8f90e0cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_payne, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:52:30 compute-0 ceph-mon[74418]: pgmap v85: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 05 09:52:30 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:52:30 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:52:30 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:52:30 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:52:30 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:52:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 880 B/s wr, 3 op/s
Dec 05 09:52:30 compute-0 sad_payne[115063]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:52:30 compute-0 sad_payne[115063]: --> All data devices are unavailable
Dec 05 09:52:30 compute-0 systemd[1]: libpod-edb7f738048bb166adafe510e0e0f3100e5295eb8df6964d3512f49b8f90e0cd.scope: Deactivated successfully.
Dec 05 09:52:30 compute-0 podman[115046]: 2025-12-05 09:52:30.729305491 +0000 UTC m=+0.519167241 container died edb7f738048bb166adafe510e0e0f3100e5295eb8df6964d3512f49b8f90e0cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 09:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a277dc820d3dbbb2c51bbd53db06e924b3c08874badc79496d421e2c81c5f510-merged.mount: Deactivated successfully.
Dec 05 09:52:30 compute-0 podman[115046]: 2025-12-05 09:52:30.780148541 +0000 UTC m=+0.570010271 container remove edb7f738048bb166adafe510e0e0f3100e5295eb8df6964d3512f49b8f90e0cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 09:52:30 compute-0 systemd[1]: libpod-conmon-edb7f738048bb166adafe510e0e0f3100e5295eb8df6964d3512f49b8f90e0cd.scope: Deactivated successfully.
Dec 05 09:52:30 compute-0 sudo[114786]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:30 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:30 compute-0 sudo[115089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:52:30 compute-0 sudo[115089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:30 compute-0 sudo[115089]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:30 compute-0 sudo[115114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:52:30 compute-0 sudo[115114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:31 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:31 compute-0 podman[115177]: 2025-12-05 09:52:31.36214053 +0000 UTC m=+0.049262147 container create bf9903482e285454af1e10334ba068b8d9dbb432f97b693f26631048ced5d3ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:52:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:31.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:31 compute-0 systemd[1]: Started libpod-conmon-bf9903482e285454af1e10334ba068b8d9dbb432f97b693f26631048ced5d3ed.scope.
Dec 05 09:52:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:52:31 compute-0 podman[115177]: 2025-12-05 09:52:31.336781612 +0000 UTC m=+0.023903259 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:52:31 compute-0 podman[115177]: 2025-12-05 09:52:31.437345171 +0000 UTC m=+0.124466808 container init bf9903482e285454af1e10334ba068b8d9dbb432f97b693f26631048ced5d3ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 09:52:31 compute-0 podman[115177]: 2025-12-05 09:52:31.444712998 +0000 UTC m=+0.131834615 container start bf9903482e285454af1e10334ba068b8d9dbb432f97b693f26631048ced5d3ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_kirch, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:52:31 compute-0 vigorous_kirch[115193]: 167 167
Dec 05 09:52:31 compute-0 podman[115177]: 2025-12-05 09:52:31.449933548 +0000 UTC m=+0.137055195 container attach bf9903482e285454af1e10334ba068b8d9dbb432f97b693f26631048ced5d3ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_kirch, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:52:31 compute-0 systemd[1]: libpod-bf9903482e285454af1e10334ba068b8d9dbb432f97b693f26631048ced5d3ed.scope: Deactivated successfully.
Dec 05 09:52:31 compute-0 podman[115177]: 2025-12-05 09:52:31.450875583 +0000 UTC m=+0.137997200 container died bf9903482e285454af1e10334ba068b8d9dbb432f97b693f26631048ced5d3ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:52:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-287cb554354ab86121d7d894143d1c881b2114e4b0cf58be51daaf5ed269bb43-merged.mount: Deactivated successfully.
Dec 05 09:52:31 compute-0 podman[115177]: 2025-12-05 09:52:31.500954252 +0000 UTC m=+0.188075869 container remove bf9903482e285454af1e10334ba068b8d9dbb432f97b693f26631048ced5d3ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 09:52:31 compute-0 systemd[1]: libpod-conmon-bf9903482e285454af1e10334ba068b8d9dbb432f97b693f26631048ced5d3ed.scope: Deactivated successfully.
Dec 05 09:52:31 compute-0 podman[115217]: 2025-12-05 09:52:31.676474215 +0000 UTC m=+0.059876903 container create de529389ed80a01b8ad2d2621011573a96b1c47b7485a2ae5244171aa0fce257 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:52:31 compute-0 systemd[1]: Started libpod-conmon-de529389ed80a01b8ad2d2621011573a96b1c47b7485a2ae5244171aa0fce257.scope.
Dec 05 09:52:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b3e52b1c2a4003c76dc1d0a131c81ab8b3d6f040bc7b446b53a3846e590f7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b3e52b1c2a4003c76dc1d0a131c81ab8b3d6f040bc7b446b53a3846e590f7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b3e52b1c2a4003c76dc1d0a131c81ab8b3d6f040bc7b446b53a3846e590f7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b3e52b1c2a4003c76dc1d0a131c81ab8b3d6f040bc7b446b53a3846e590f7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:31 compute-0 podman[115217]: 2025-12-05 09:52:31.647937921 +0000 UTC m=+0.031340619 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:52:31 compute-0 podman[115217]: 2025-12-05 09:52:31.758193649 +0000 UTC m=+0.141596347 container init de529389ed80a01b8ad2d2621011573a96b1c47b7485a2ae5244171aa0fce257 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 09:52:31 compute-0 podman[115217]: 2025-12-05 09:52:31.765484983 +0000 UTC m=+0.148887661 container start de529389ed80a01b8ad2d2621011573a96b1c47b7485a2ae5244171aa0fce257 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_aryabhata, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:52:31 compute-0 sudo[115033]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:31 compute-0 podman[115217]: 2025-12-05 09:52:31.770102197 +0000 UTC m=+0.153504885 container attach de529389ed80a01b8ad2d2621011573a96b1c47b7485a2ae5244171aa0fce257 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 09:52:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:31.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]: {
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:     "1": [
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:         {
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             "devices": [
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "/dev/loop3"
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             ],
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             "lv_name": "ceph_lv0",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             "lv_size": "21470642176",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             "name": "ceph_lv0",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             "tags": {
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.cluster_name": "ceph",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.crush_device_class": "",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.encrypted": "0",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.osd_id": "1",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.type": "block",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.vdo": "0",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:                 "ceph.with_tpm": "0"
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             },
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             "type": "block",
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:             "vg_name": "ceph_vg0"
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:         }
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]:     ]
Dec 05 09:52:32 compute-0 nostalgic_aryabhata[115233]: }
Dec 05 09:52:32 compute-0 systemd[1]: libpod-de529389ed80a01b8ad2d2621011573a96b1c47b7485a2ae5244171aa0fce257.scope: Deactivated successfully.
Dec 05 09:52:32 compute-0 podman[115217]: 2025-12-05 09:52:32.059121384 +0000 UTC m=+0.442524072 container died de529389ed80a01b8ad2d2621011573a96b1c47b7485a2ae5244171aa0fce257 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_aryabhata, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:52:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2b3e52b1c2a4003c76dc1d0a131c81ab8b3d6f040bc7b446b53a3846e590f7b-merged.mount: Deactivated successfully.
Dec 05 09:52:32 compute-0 podman[115217]: 2025-12-05 09:52:32.119985562 +0000 UTC m=+0.503388250 container remove de529389ed80a01b8ad2d2621011573a96b1c47b7485a2ae5244171aa0fce257 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:52:32 compute-0 systemd[1]: libpod-conmon-de529389ed80a01b8ad2d2621011573a96b1c47b7485a2ae5244171aa0fce257.scope: Deactivated successfully.
Dec 05 09:52:32 compute-0 sudo[115114]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:32 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:32 compute-0 sudo[115300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:52:32 compute-0 sudo[115300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:32 compute-0 sudo[115300]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:32 compute-0 sudo[115352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:52:32 compute-0 sudo[115352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:52:32 compute-0 ceph-mon[74418]: pgmap v86: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 880 B/s wr, 3 op/s
Dec 05 09:52:32 compute-0 sudo[115496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niyowdpbqskhlejgahaxbcgrxwsecied ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928352.1886492-302-258549301400010/AnsiballZ_command.py'
Dec 05 09:52:32 compute-0 sudo[115496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 880 B/s wr, 3 op/s
Dec 05 09:52:32 compute-0 podman[115500]: 2025-12-05 09:52:32.682407328 +0000 UTC m=+0.045097827 container create 722e145dd36be5d1a9b730aca3d84e01b04fc2d300c805e307ad7fd3f162b8cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Dec 05 09:52:32 compute-0 systemd[1]: Started libpod-conmon-722e145dd36be5d1a9b730aca3d84e01b04fc2d300c805e307ad7fd3f162b8cb.scope.
Dec 05 09:52:32 compute-0 podman[115500]: 2025-12-05 09:52:32.662867536 +0000 UTC m=+0.025558055 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:52:32 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:52:32 compute-0 podman[115500]: 2025-12-05 09:52:32.78045807 +0000 UTC m=+0.143148599 container init 722e145dd36be5d1a9b730aca3d84e01b04fc2d300c805e307ad7fd3f162b8cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:52:32 compute-0 podman[115500]: 2025-12-05 09:52:32.788664409 +0000 UTC m=+0.151354908 container start 722e145dd36be5d1a9b730aca3d84e01b04fc2d300c805e307ad7fd3f162b8cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_rhodes, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 09:52:32 compute-0 podman[115500]: 2025-12-05 09:52:32.792582804 +0000 UTC m=+0.155273303 container attach 722e145dd36be5d1a9b730aca3d84e01b04fc2d300c805e307ad7fd3f162b8cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_rhodes, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 09:52:32 compute-0 systemd[1]: libpod-722e145dd36be5d1a9b730aca3d84e01b04fc2d300c805e307ad7fd3f162b8cb.scope: Deactivated successfully.
Dec 05 09:52:32 compute-0 dazzling_rhodes[115516]: 167 167
Dec 05 09:52:32 compute-0 conmon[115516]: conmon 722e145dd36be5d1a9b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-722e145dd36be5d1a9b730aca3d84e01b04fc2d300c805e307ad7fd3f162b8cb.scope/container/memory.events
Dec 05 09:52:32 compute-0 podman[115500]: 2025-12-05 09:52:32.796354564 +0000 UTC m=+0.159045083 container died 722e145dd36be5d1a9b730aca3d84e01b04fc2d300c805e307ad7fd3f162b8cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:52:32 compute-0 python3.9[115499]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:52:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a93e3ad59ea2c237930d2c7fc9231d5084eb4e0af847763a29f60a6e028fd63f-merged.mount: Deactivated successfully.
Dec 05 09:52:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:32 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:32 compute-0 podman[115500]: 2025-12-05 09:52:32.838980533 +0000 UTC m=+0.201671032 container remove 722e145dd36be5d1a9b730aca3d84e01b04fc2d300c805e307ad7fd3f162b8cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:52:32 compute-0 systemd[1]: libpod-conmon-722e145dd36be5d1a9b730aca3d84e01b04fc2d300c805e307ad7fd3f162b8cb.scope: Deactivated successfully.
Dec 05 09:52:33 compute-0 podman[115545]: 2025-12-05 09:52:33.033022672 +0000 UTC m=+0.046331830 container create c00f095ac3693ca8a699d0687d5a831fa798ad144b2b43d3e97354cb915874c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mirzakhani, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 09:52:33 compute-0 systemd[1]: Started libpod-conmon-c00f095ac3693ca8a699d0687d5a831fa798ad144b2b43d3e97354cb915874c9.scope.
Dec 05 09:52:33 compute-0 podman[115545]: 2025-12-05 09:52:33.011511536 +0000 UTC m=+0.024820724 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:52:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e233858d80db43373bc4cbb2189151ad0c05819e02e100358827bc915c35d987/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e233858d80db43373bc4cbb2189151ad0c05819e02e100358827bc915c35d987/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e233858d80db43373bc4cbb2189151ad0c05819e02e100358827bc915c35d987/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e233858d80db43373bc4cbb2189151ad0c05819e02e100358827bc915c35d987/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:33 compute-0 podman[115545]: 2025-12-05 09:52:33.147713198 +0000 UTC m=+0.161022376 container init c00f095ac3693ca8a699d0687d5a831fa798ad144b2b43d3e97354cb915874c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:52:33 compute-0 podman[115545]: 2025-12-05 09:52:33.157305414 +0000 UTC m=+0.170614582 container start c00f095ac3693ca8a699d0687d5a831fa798ad144b2b43d3e97354cb915874c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mirzakhani, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:52:33 compute-0 podman[115545]: 2025-12-05 09:52:33.161499627 +0000 UTC m=+0.174808785 container attach c00f095ac3693ca8a699d0687d5a831fa798ad144b2b43d3e97354cb915874c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Dec 05 09:52:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:33 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:33.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:33 compute-0 sudo[115496]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:33 compute-0 lvm[115788]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:52:33 compute-0 lvm[115788]: VG ceph_vg0 finished
Dec 05 09:52:33 compute-0 sharp_mirzakhani[115561]: {}
Dec 05 09:52:33 compute-0 systemd[1]: libpod-c00f095ac3693ca8a699d0687d5a831fa798ad144b2b43d3e97354cb915874c9.scope: Deactivated successfully.
Dec 05 09:52:33 compute-0 systemd[1]: libpod-c00f095ac3693ca8a699d0687d5a831fa798ad144b2b43d3e97354cb915874c9.scope: Consumed 1.071s CPU time.
Dec 05 09:52:33 compute-0 podman[115545]: 2025-12-05 09:52:33.834005086 +0000 UTC m=+0.847314264 container died c00f095ac3693ca8a699d0687d5a831fa798ad144b2b43d3e97354cb915874c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 09:52:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e233858d80db43373bc4cbb2189151ad0c05819e02e100358827bc915c35d987-merged.mount: Deactivated successfully.
Dec 05 09:52:33 compute-0 podman[115545]: 2025-12-05 09:52:33.878765353 +0000 UTC m=+0.892074511 container remove c00f095ac3693ca8a699d0687d5a831fa798ad144b2b43d3e97354cb915874c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 09:52:33 compute-0 systemd[1]: libpod-conmon-c00f095ac3693ca8a699d0687d5a831fa798ad144b2b43d3e97354cb915874c9.scope: Deactivated successfully.
Dec 05 09:52:33 compute-0 sudo[115352]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:52:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:52:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:33.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:52:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:34 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:34 compute-0 sudo[115931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ericdtdrfbrpfhficsbxwiqxgkobnoay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928353.9174337-326-58949935523604/AnsiballZ_file.py'
Dec 05 09:52:34 compute-0 sudo[115931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:52:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:52:34 compute-0 ceph-mon[74418]: pgmap v87: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 880 B/s wr, 3 op/s
Dec 05 09:52:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:52:34 compute-0 python3.9[115933]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 09:52:34 compute-0 sudo[115931]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:34 compute-0 sudo[115934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:52:34 compute-0 sudo[115934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:34 compute-0 sudo[115934]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 B/s rd, 0 B/s wr, 0 op/s
Dec 05 09:52:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:34 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:35 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:35.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:35 compute-0 python3.9[116108]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:52:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:52:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:52:35 compute-0 ceph-mon[74418]: pgmap v88: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 B/s rd, 0 B/s wr, 0 op/s
Dec 05 09:52:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:52:35] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Dec 05 09:52:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:52:35] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Dec 05 09:52:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:35.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:36 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:36 compute-0 sudo[116262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edvdwvvjqapouuemrvuzjcubylxxcwnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928355.7162979-374-216739663686965/AnsiballZ_dnf.py'
Dec 05 09:52:36 compute-0 sudo[116262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:36 compute-0 python3.9[116264]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:52:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 B/s rd, 0 B/s wr, 0 op/s
Dec 05 09:52:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:36 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:36.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:52:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:36.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:52:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:37 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:52:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:37.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:37 compute-0 ceph-mon[74418]: pgmap v89: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 B/s rd, 0 B/s wr, 0 op/s
Dec 05 09:52:37 compute-0 sudo[116262]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:38.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:38 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:38 compute-0 sudo[116417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhvivskubxzdvovuzcfakufkifokkfbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928358.2496252-401-231106793193254/AnsiballZ_dnf.py'
Dec 05 09:52:38 compute-0 sudo[116417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 391 B/s rd, 0 B/s wr, 0 op/s
Dec 05 09:52:38 compute-0 python3.9[116419]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:52:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:38 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:38.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:52:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:39 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:39.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:39 compute-0 ceph-mon[74418]: pgmap v90: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 391 B/s rd, 0 B/s wr, 0 op/s
Dec 05 09:52:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:52:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:40.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:52:40 compute-0 sudo[116417]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:40 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:40 compute-0 sudo[116572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwbymvgxemglblsluefimowlsfdecsej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928360.739621-437-93526601159009/AnsiballZ_stat.py'
Dec 05 09:52:40 compute-0 sudo[116572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:41 compute-0 python3.9[116574]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:52:41 compute-0 sudo[116572]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:41 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2d4002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:41.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:41 compute-0 ceph-mon[74418]: pgmap v91: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:41 compute-0 sudo[116726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsxvvvxdgwoujdkuqxpxessvewspxevd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928361.533339-461-98921687982651/AnsiballZ_slurp.py'
Dec 05 09:52:41 compute-0 sudo[116726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:52:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:42.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:42 compute-0 python3.9[116728]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 05 09:52:42 compute-0 sudo[116726]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:52:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:52:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:52:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:42 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:52:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:43 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:43.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:43 compute-0 sshd-session[112573]: Connection closed by 192.168.122.30 port 41534
Dec 05 09:52:43 compute-0 sshd-session[112570]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:52:43 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Dec 05 09:52:43 compute-0 systemd[1]: session-40.scope: Consumed 17.983s CPU time.
Dec 05 09:52:43 compute-0 systemd-logind[789]: Session 40 logged out. Waiting for processes to exit.
Dec 05 09:52:43 compute-0 systemd-logind[789]: Removed session 40.
Dec 05 09:52:43 compute-0 ceph-mon[74418]: pgmap v92: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:44.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:44 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v93: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:44 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:45 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:52:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:45.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:52:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:52:45] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Dec 05 09:52:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:52:45] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Dec 05 09:52:45 compute-0 ceph-mon[74418]: pgmap v93: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:45 compute-0 sudo[116758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:52:45 compute-0 sudo[116758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:52:45 compute-0 sudo[116758]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:46.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:46 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:46 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:46.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:52:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:46.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:52:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:47 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:47.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:52:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:48.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:52:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:48 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:52:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:48 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:48.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:52:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:52:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:49 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2a8000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:52:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:49.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:52:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:50.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:50 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v96: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:50 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:51 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:51.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:52.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:52 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2a8001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:52 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:53 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:53.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:52:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:52:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:54.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:52:54 compute-0 ceph-mon[74418]: pgmap v94: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:54 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:54 compute-0 sshd-session[116794]: Accepted publickey for zuul from 192.168.122.30 port 48352 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:52:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:54 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2a8001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:54 compute-0 systemd-logind[789]: New session 41 of user zuul.
Dec 05 09:52:54 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec 05 09:52:54 compute-0 sshd-session[116794]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:52:55 compute-0 ceph-mon[74418]: pgmap v95: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:52:55 compute-0 ceph-mon[74418]: pgmap v96: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:55 compute-0 ceph-mon[74418]: pgmap v97: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:55 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc002ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:55.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:52:55] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Dec 05 09:52:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:52:55] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Dec 05 09:52:55 compute-0 python3.9[116947]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:52:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:56.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:56 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:56 compute-0 ceph-mon[74418]: pgmap v98: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:56 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2c4004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:56.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:52:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:57 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2a8001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:52:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:52:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:57.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:52:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:52:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:52:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:52:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:52:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:52:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:52:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:52:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:52:57 compute-0 python3.9[117103]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:52:57 compute-0 ceph-mon[74418]: pgmap v99: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:52:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:52:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:52:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:52:58.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:52:58 compute-0 kernel: ganesha.nfsd[116757]: segfault at 50 ip 00007fa38d2dc32e sp 00007fa344ff8210 error 4 in libntirpc.so.5.8[7fa38d2c1000+2c000] likely on CPU 0 (core 0, socket 0)
Dec 05 09:52:58 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 05 09:52:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[108852]: 05/12/2025 09:52:58 : epoch 6932ab26 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2bc002ec0 fd 39 proxy ignored for local
Dec 05 09:52:58 compute-0 systemd[1]: Started Process Core Dump (PID 117173/UID 0).
Dec 05 09:52:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:52:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:52:58.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:52:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:52:59 compute-0 python3.9[117300]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:52:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:52:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:52:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:52:59.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:52:59 compute-0 systemd-coredump[117174]: Process 108868 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 59:
                                                    #0  0x00007fa38d2dc32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 05 09:52:59 compute-0 systemd[1]: systemd-coredump@1-117173-0.service: Deactivated successfully.
Dec 05 09:52:59 compute-0 systemd[1]: systemd-coredump@1-117173-0.service: Consumed 1.265s CPU time.
Dec 05 09:52:59 compute-0 podman[117330]: 2025-12-05 09:52:59.62180602 +0000 UTC m=+0.027697192 container died 8b0b7133412abf48eb3fde5627ee2a49adbe1515c69b84fd9c455adf79913da5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:52:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-43b1396a20c3093aa5507217a9fbe97d599b80698d76c52743ab1dcfdb6f81c8-merged.mount: Deactivated successfully.
Dec 05 09:52:59 compute-0 systemd[92054]: Created slice User Background Tasks Slice.
Dec 05 09:52:59 compute-0 systemd[92054]: Starting Cleanup of User's Temporary Files and Directories...
Dec 05 09:52:59 compute-0 podman[117330]: 2025-12-05 09:52:59.665757425 +0000 UTC m=+0.071648577 container remove 8b0b7133412abf48eb3fde5627ee2a49adbe1515c69b84fd9c455adf79913da5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:52:59 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Main process exited, code=exited, status=139/n/a
Dec 05 09:52:59 compute-0 systemd[92054]: Finished Cleanup of User's Temporary Files and Directories.
Dec 05 09:52:59 compute-0 sshd-session[116797]: Connection closed by 192.168.122.30 port 48352
Dec 05 09:52:59 compute-0 sshd-session[116794]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:52:59 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Dec 05 09:52:59 compute-0 systemd[1]: session-41.scope: Consumed 2.251s CPU time.
Dec 05 09:52:59 compute-0 systemd-logind[789]: Session 41 logged out. Waiting for processes to exit.
Dec 05 09:52:59 compute-0 systemd-logind[789]: Removed session 41.
Dec 05 09:52:59 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Failed with result 'exit-code'.
Dec 05 09:52:59 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 1.694s CPU time.
Dec 05 09:53:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:00.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:01.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:01 compute-0 ceph-mon[74418]: pgmap v100: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:53:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:02.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:03 compute-0 ceph-mon[74418]: pgmap v101: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:03.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:04.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095304 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:53:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v103: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:05.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:53:05] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Dec 05 09:53:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:53:05] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Dec 05 09:53:06 compute-0 sudo[117380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:53:06 compute-0 sudo[117380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:06 compute-0 sudo[117380]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:06.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:53:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:06.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:53:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:07.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:53:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:08.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:53:08 compute-0 ceph-mon[74418]: pgmap v102: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:08 compute-0 ceph-mon[74418]: pgmap v103: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 09:53:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:08.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:53:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:08.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:53:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:09.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:09 compute-0 sshd-session[117409]: Accepted publickey for zuul from 192.168.122.30 port 56254 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:53:09 compute-0 systemd-logind[789]: New session 42 of user zuul.
Dec 05 09:53:09 compute-0 systemd[1]: Started Session 42 of User zuul.
Dec 05 09:53:09 compute-0 sshd-session[117409]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:53:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:10.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:10 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Scheduled restart job, restart counter is at 2.
Dec 05 09:53:10 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:53:10 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 1.694s CPU time.
Dec 05 09:53:10 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:53:10 compute-0 podman[117561]: 2025-12-05 09:53:10.278706225 +0000 UTC m=+0.022962897 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:53:10 compute-0 python3.9[117625]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:53:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v106: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:53:10 compute-0 podman[117561]: 2025-12-05 09:53:10.755872099 +0000 UTC m=+0.500128741 container create 972a80b3db4ec87a0de4970a0c44d162b66da273f0d7ab070e3c06feb681d206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 09:53:10 compute-0 ceph-mon[74418]: pgmap v104: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c16d39076c205eb771d8e7457ee83dc0aa0ee695ce4e2f4047f9aa94182e506/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c16d39076c205eb771d8e7457ee83dc0aa0ee695ce4e2f4047f9aa94182e506/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c16d39076c205eb771d8e7457ee83dc0aa0ee695ce4e2f4047f9aa94182e506/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c16d39076c205eb771d8e7457ee83dc0aa0ee695ce4e2f4047f9aa94182e506/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hocvro-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:11.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:11 compute-0 python3.9[117784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:53:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:12.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:12 compute-0 sudo[117940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhtfosfejzuijgrqjzhnlzrddvraawdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928392.2934172-80-226237597264644/AnsiballZ_setup.py'
Dec 05 09:53:12 compute-0 sudo[117940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:53:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:13.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:14.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:53:14 compute-0 python3.9[117942]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:53:14 compute-0 sudo[117940]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:53:14 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:53:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:53:14 compute-0 sudo[118026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcnsvwbqecpjijgjquredytylxwdngea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928392.2934172-80-226237597264644/AnsiballZ_dnf.py'
Dec 05 09:53:14 compute-0 sudo[118026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:14 compute-0 podman[117561]: 2025-12-05 09:53:14.981902401 +0000 UTC m=+4.726159063 container init 972a80b3db4ec87a0de4970a0c44d162b66da273f0d7ab070e3c06feb681d206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:53:14 compute-0 podman[117561]: 2025-12-05 09:53:14.990988228 +0000 UTC m=+4.735244870 container start 972a80b3db4ec87a0de4970a0c44d162b66da273f0d7ab070e3c06feb681d206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 09:53:14 compute-0 python3.9[118028]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:53:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:14 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 05 09:53:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:14 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 05 09:53:15 compute-0 bash[117561]: 972a80b3db4ec87a0de4970a0c44d162b66da273f0d7ab070e3c06feb681d206
Dec 05 09:53:15 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:53:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:15.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:15 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 05 09:53:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:15 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 05 09:53:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:15 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 05 09:53:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:15 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 05 09:53:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:15 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 05 09:53:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:15 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:53:15 compute-0 ceph-mon[74418]: pgmap v105: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 09:53:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:53:15] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Dec 05 09:53:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:53:15] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Dec 05 09:53:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:16.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:16 compute-0 sudo[118026]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:53:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:16.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:53:17 compute-0 ceph-mon[74418]: pgmap v106: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:53:17 compute-0 ceph-mon[74418]: pgmap v107: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:53:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:53:17 compute-0 ceph-mon[74418]: pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:53:17 compute-0 sudo[118220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmsmnauzcvbonjbooemxjtuxwpujruds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928396.7300413-116-279002994051647/AnsiballZ_setup.py'
Dec 05 09:53:17 compute-0 sudo[118220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:17.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:17 compute-0 python3.9[118222]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:53:17 compute-0 sudo[118220]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:18.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:53:18 compute-0 ceph-mon[74418]: pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:53:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:18.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:53:18 compute-0 sudo[118417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxzypasbchyscwchetyizaksvjijhczw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928398.3288128-149-107996912403266/AnsiballZ_file.py'
Dec 05 09:53:18 compute-0 sudo[118417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:53:19 compute-0 python3.9[118419]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:53:19 compute-0 sudo[118417]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:19.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:19 compute-0 sudo[118569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwpfnhqhmtpnsusbhcihyytgmapwjhsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928399.3899336-173-104561605723100/AnsiballZ_command.py'
Dec 05 09:53:19 compute-0 sudo[118569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:20 compute-0 python3.9[118571]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:53:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:20.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:20 compute-0 sudo[118569]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:53:20 compute-0 ceph-mon[74418]: pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:53:20 compute-0 sudo[118736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnmbeivhhmzgrevenczuhqtbtuylxpsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928400.4512954-197-122673415263951/AnsiballZ_stat.py'
Dec 05 09:53:20 compute-0 sudo[118736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:21 compute-0 python3.9[118738]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:53:21 compute-0 sudo[118736]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:21.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:21 compute-0 sudo[118814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxbraxbvhiogshdysgmsmzjebhxwppfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928400.4512954-197-122673415263951/AnsiballZ_file.py'
Dec 05 09:53:21 compute-0 sudo[118814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:21 compute-0 python3.9[118816]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:53:21 compute-0 sudo[118814]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:21 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:53:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:21 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:53:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:22.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:22 compute-0 sudo[118968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaeamuhnmpjqtrmwvyvpuflpywpvjqer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928402.1720014-233-269067798004567/AnsiballZ_stat.py'
Dec 05 09:53:22 compute-0 sudo[118968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:53:22 compute-0 python3.9[118970]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:53:22 compute-0 sudo[118968]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:22 compute-0 sudo[119046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vefoelbmpgsfbqukjkwtzlhcqmycxejy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928402.1720014-233-269067798004567/AnsiballZ_file.py'
Dec 05 09:53:22 compute-0 sudo[119046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:23 compute-0 python3.9[119048]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:53:23 compute-0 sudo[119046]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:23.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:24.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:53:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:53:24 compute-0 sudo[119200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sacllvxujpinwfwdbktbrivemmokktzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928404.187548-272-117217422100478/AnsiballZ_ini_file.py'
Dec 05 09:53:24 compute-0 sudo[119200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:24 compute-0 python3.9[119202]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:53:24 compute-0 sudo[119200]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:25 compute-0 ceph-mon[74418]: pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:53:25 compute-0 sudo[119352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgdgtxdpbaacrsaapulxayrgwcmlywdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928405.090222-272-25362878476243/AnsiballZ_ini_file.py'
Dec 05 09:53:25 compute-0 sudo[119352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:25.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:53:25] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Dec 05 09:53:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:53:25] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Dec 05 09:53:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:53:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.7 total, 600.0 interval
                                           Cumulative writes: 2355 writes, 11K keys, 2355 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2355 writes, 2355 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2355 writes, 11K keys, 2355 commit groups, 1.0 writes per commit group, ingest: 21.64 MB, 0.04 MB/s
                                           Interval WAL: 2355 writes, 2355 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     23.4      0.77              0.14         4    0.193       0      0       0.0       0.0
                                             L6      1/0   13.81 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     59.6     53.5      0.69              0.20         3    0.230     12K   1364       0.0       0.0
                                            Sum      1/0   13.81 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.0     28.1     37.6      1.46              0.34         7    0.208     12K   1364       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.1     41.2     55.0      1.00              0.34         6    0.166     12K   1364       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     59.6     53.5      0.69              0.20         3    0.230     12K   1364       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     58.5      0.31              0.14         3    0.102       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.46              0.00         1    0.463       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.7 total, 600.0 interval
                                           Flush(GB): cumulative 0.018, interval 0.018
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.09 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.5 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5585d4f19350#2 capacity: 304.00 MB usage: 1.14 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(76,1.01 MB,0.330714%) FilterBlock(8,45.67 KB,0.0146715%) IndexBlock(8,94.20 KB,0.0302616%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 05 09:53:26 compute-0 python3.9[119354]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:53:26 compute-0 sudo[119352]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:26.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:26 compute-0 sudo[119356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:53:26 compute-0 sudo[119356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:26 compute-0 sudo[119356]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:26 compute-0 ceph-mon[74418]: pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:53:26 compute-0 ceph-mon[74418]: pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:53:26 compute-0 sudo[119531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbzpxnmxyguiyojyjoudznqlehlwflvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928406.1821256-272-165476737290976/AnsiballZ_ini_file.py'
Dec 05 09:53:26 compute-0 sudo[119531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:26 compute-0 python3.9[119533]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:53:26 compute-0 sudo[119531]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:53:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:26.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:53:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:26.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:53:26 compute-0 sudo[119683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fstjvhncrrkgjsybbcfjmdupjxxzkbei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928406.7589836-272-238110781955900/AnsiballZ_ini_file.py'
Dec 05 09:53:27 compute-0 sudo[119683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:27 compute-0 python3.9[119685]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:53:27 compute-0 sudo[119683]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:53:27
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'backups', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.nfs']
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 09:53:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:27.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 09:53:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:53:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:53:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:53:27 compute-0 sudo[119835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqcrmctleypohfsajyzafkerawubhmnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928407.5348823-365-91064685745999/AnsiballZ_dnf.py'
Dec 05 09:53:27 compute-0 sudo[119835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 05 09:53:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 09:53:28 compute-0 python3.9[119837]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:53:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:28.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:28 compute-0 ceph-mon[74418]: pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:53:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:53:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:28 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc618000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 05 09:53:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:28.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:53:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:28.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:53:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:28 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6040016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:53:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:29 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:53:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:29.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:53:29 compute-0 sudo[119835]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:30.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:30 compute-0 ceph-mon[74418]: pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 05 09:53:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095330 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 09:53:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:30 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:30 compute-0 sudo[120007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrwqayndbdoqvnekyrfodaprsqsdhtzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928410.1930234-398-242250550065194/AnsiballZ_setup.py'
Dec 05 09:53:30 compute-0 sudo[120007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:53:30 compute-0 python3.9[120009]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:53:30 compute-0 sudo[120007]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:30 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:31 compute-0 sudo[120161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soytltxllvzqsvifusssgvhgsffcbwlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928411.1419659-422-200413639081997/AnsiballZ_stat.py'
Dec 05 09:53:31 compute-0 sudo[120161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:31 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6040016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:31.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:31 compute-0 python3.9[120163]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:53:31 compute-0 sudo[120161]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:32.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:32 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:32 compute-0 sudo[120315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsoutcxhhgbvjiaykyephzjhdwgugzxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928411.984684-449-173441038966251/AnsiballZ_stat.py'
Dec 05 09:53:32 compute-0 sudo[120315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:32 compute-0 python3.9[120317]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:53:32 compute-0 sudo[120315]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:53:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:32 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:33 compute-0 sudo[120467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vofugcwcoajzuycmkynauetayfzdydgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928412.801648-479-280549482400727/AnsiballZ_command.py'
Dec 05 09:53:33 compute-0 sudo[120467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:33 compute-0 python3.9[120469]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:53:33 compute-0 sudo[120467]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:33 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6100025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:33.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:34.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:34 compute-0 sudo[120621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oenejwszjhsrbdsiivndrfohrrlyorvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928413.7421043-509-87504961538915/AnsiballZ_service_facts.py'
Dec 05 09:53:34 compute-0 sudo[120621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:34 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6040016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:53:34 compute-0 python3.9[120624]: ansible-service_facts Invoked
Dec 05 09:53:34 compute-0 network[120641]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:53:34 compute-0 network[120642]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:53:34 compute-0 network[120643]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:53:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:53:34 compute-0 ceph-mon[74418]: pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:53:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:34 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:35 compute-0 sudo[120649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:53:35 compute-0 sudo[120649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:35 compute-0 sudo[120649]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:35 compute-0 sudo[120675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 09:53:35 compute-0 sudo[120675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:35 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:35.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:53:35] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 09:53:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:53:35] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 09:53:35 compute-0 ceph-mon[74418]: pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:53:35 compute-0 ceph-mon[74418]: pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:53:35 compute-0 podman[120774]: 2025-12-05 09:53:35.810156879 +0000 UTC m=+0.061960518 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:53:35 compute-0 podman[120774]: 2025-12-05 09:53:35.915633962 +0000 UTC m=+0.167437581 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:53:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:36.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:36 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:36 compute-0 podman[120892]: 2025-12-05 09:53:36.314224666 +0000 UTC m=+0.053105228 container exec 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:53:36 compute-0 podman[120892]: 2025-12-05 09:53:36.348683174 +0000 UTC m=+0.087563736 container exec_died 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:53:36 compute-0 podman[120997]: 2025-12-05 09:53:36.653281079 +0000 UTC m=+0.054121705 container exec 972a80b3db4ec87a0de4970a0c44d162b66da273f0d7ab070e3c06feb681d206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:53:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:53:36 compute-0 podman[120997]: 2025-12-05 09:53:36.666674654 +0000 UTC m=+0.067515260 container exec_died 972a80b3db4ec87a0de4970a0c44d162b66da273f0d7ab070e3c06feb681d206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:53:36 compute-0 podman[121070]: 2025-12-05 09:53:36.862375473 +0000 UTC m=+0.047678139 container exec d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 09:53:36 compute-0 podman[121070]: 2025-12-05 09:53:36.871952504 +0000 UTC m=+0.057255140 container exec_died d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 09:53:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:36 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6040016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:36.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:53:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:36.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:53:37 compute-0 podman[121146]: 2025-12-05 09:53:37.084398939 +0000 UTC m=+0.054455224 container exec f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vcs-type=git, name=keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20)
Dec 05 09:53:37 compute-0 podman[121146]: 2025-12-05 09:53:37.100726384 +0000 UTC m=+0.070782669 container exec_died f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, distribution-scope=public, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, architecture=x86_64, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git)
Dec 05 09:53:37 compute-0 podman[121224]: 2025-12-05 09:53:37.315591634 +0000 UTC m=+0.049685623 container exec a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:53:37 compute-0 podman[121224]: 2025-12-05 09:53:37.352745607 +0000 UTC m=+0.086839596 container exec_died a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:53:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:37 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:37.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:37 compute-0 podman[121312]: 2025-12-05 09:53:37.558437498 +0000 UTC m=+0.053405316 container exec 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:53:37 compute-0 podman[121312]: 2025-12-05 09:53:37.763279936 +0000 UTC m=+0.258247754 container exec_died 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 09:53:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:38.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:38 compute-0 sudo[120621]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:38 compute-0 ceph-mon[74418]: pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:53:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:38 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:38 compute-0 podman[121486]: 2025-12-05 09:53:38.434311699 +0000 UTC m=+0.046188459 container exec 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:53:38 compute-0 podman[121486]: 2025-12-05 09:53:38.496224765 +0000 UTC m=+0.108101525 container exec_died 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:53:38 compute-0 sudo[120675]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:53:38 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:53:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:53:38 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:53:38 compute-0 sudo[121529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:53:38 compute-0 sudo[121529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:38 compute-0 sudo[121529]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:53:38 compute-0 sudo[121554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:53:38 compute-0 sudo[121554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:38.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:53:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:38.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:53:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:38 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:39 compute-0 sudo[121554]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:53:39 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:53:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:53:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:53:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Dec 05 09:53:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:53:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:53:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:53:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:53:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:53:39 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:53:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:53:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:53:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:53:39 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:53:39 compute-0 sudo[121611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:53:39 compute-0 sudo[121611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:39 compute-0 sudo[121611]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:39 compute-0 sudo[121636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:53:39 compute-0 sudo[121636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:53:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:39 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 05 09:53:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:39.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 05 09:53:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:53:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:53:39 compute-0 ceph-mon[74418]: pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:53:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:53:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:53:39 compute-0 ceph-mon[74418]: pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Dec 05 09:53:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:53:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:53:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:53:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:53:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:53:39 compute-0 podman[121703]: 2025-12-05 09:53:39.750581744 +0000 UTC m=+0.046159009 container create 74710ff6e6b5d5c8ad43d2b1f677a60b3e5a9f2767cbc7162caae1e90be4e34c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_leakey, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 09:53:39 compute-0 systemd[1]: Started libpod-conmon-74710ff6e6b5d5c8ad43d2b1f677a60b3e5a9f2767cbc7162caae1e90be4e34c.scope.
Dec 05 09:53:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:53:39 compute-0 podman[121703]: 2025-12-05 09:53:39.823189221 +0000 UTC m=+0.118766506 container init 74710ff6e6b5d5c8ad43d2b1f677a60b3e5a9f2767cbc7162caae1e90be4e34c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:53:39 compute-0 podman[121703]: 2025-12-05 09:53:39.731954276 +0000 UTC m=+0.027531561 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:53:39 compute-0 podman[121703]: 2025-12-05 09:53:39.829467982 +0000 UTC m=+0.125045247 container start 74710ff6e6b5d5c8ad43d2b1f677a60b3e5a9f2767cbc7162caae1e90be4e34c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_leakey, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 09:53:39 compute-0 podman[121703]: 2025-12-05 09:53:39.833046299 +0000 UTC m=+0.128623574 container attach 74710ff6e6b5d5c8ad43d2b1f677a60b3e5a9f2767cbc7162caae1e90be4e34c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_leakey, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 09:53:39 compute-0 lucid_leakey[121719]: 167 167
Dec 05 09:53:39 compute-0 systemd[1]: libpod-74710ff6e6b5d5c8ad43d2b1f677a60b3e5a9f2767cbc7162caae1e90be4e34c.scope: Deactivated successfully.
Dec 05 09:53:39 compute-0 podman[121703]: 2025-12-05 09:53:39.83640353 +0000 UTC m=+0.131980815 container died 74710ff6e6b5d5c8ad43d2b1f677a60b3e5a9f2767cbc7162caae1e90be4e34c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:53:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-58e8b2fd3c4655437bdf0d42f1b4aa9dd911a4683c90209b34fea186ee95f8dc-merged.mount: Deactivated successfully.
Dec 05 09:53:39 compute-0 podman[121703]: 2025-12-05 09:53:39.884437168 +0000 UTC m=+0.180014433 container remove 74710ff6e6b5d5c8ad43d2b1f677a60b3e5a9f2767cbc7162caae1e90be4e34c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:53:39 compute-0 systemd[1]: libpod-conmon-74710ff6e6b5d5c8ad43d2b1f677a60b3e5a9f2767cbc7162caae1e90be4e34c.scope: Deactivated successfully.
Dec 05 09:53:40 compute-0 podman[121746]: 2025-12-05 09:53:40.039211804 +0000 UTC m=+0.040709850 container create 2b29128f645a20e1d5047ac0a497b56a1584e7241d8c125e830bd4a3adb5762a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 05 09:53:40 compute-0 systemd[1]: Started libpod-conmon-2b29128f645a20e1d5047ac0a497b56a1584e7241d8c125e830bd4a3adb5762a.scope.
Dec 05 09:53:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:53:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ebf9240f359cfa7375ad26f0f6061ac518ef8424f6284fd31f4e901c7d2ec5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ebf9240f359cfa7375ad26f0f6061ac518ef8424f6284fd31f4e901c7d2ec5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ebf9240f359cfa7375ad26f0f6061ac518ef8424f6284fd31f4e901c7d2ec5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ebf9240f359cfa7375ad26f0f6061ac518ef8424f6284fd31f4e901c7d2ec5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ebf9240f359cfa7375ad26f0f6061ac518ef8424f6284fd31f4e901c7d2ec5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:40 compute-0 podman[121746]: 2025-12-05 09:53:40.022573171 +0000 UTC m=+0.024071247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:53:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:40.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:40 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:40 compute-0 podman[121746]: 2025-12-05 09:53:40.43164093 +0000 UTC m=+0.433139016 container init 2b29128f645a20e1d5047ac0a497b56a1584e7241d8c125e830bd4a3adb5762a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_gagarin, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 09:53:40 compute-0 podman[121746]: 2025-12-05 09:53:40.439889585 +0000 UTC m=+0.441387671 container start 2b29128f645a20e1d5047ac0a497b56a1584e7241d8c125e830bd4a3adb5762a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 05 09:53:40 compute-0 podman[121746]: 2025-12-05 09:53:40.44449572 +0000 UTC m=+0.445993806 container attach 2b29128f645a20e1d5047ac0a497b56a1584e7241d8c125e830bd4a3adb5762a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_gagarin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:53:40 compute-0 infallible_gagarin[121764]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:53:40 compute-0 infallible_gagarin[121764]: --> All data devices are unavailable
Dec 05 09:53:40 compute-0 systemd[1]: libpod-2b29128f645a20e1d5047ac0a497b56a1584e7241d8c125e830bd4a3adb5762a.scope: Deactivated successfully.
Dec 05 09:53:40 compute-0 podman[121746]: 2025-12-05 09:53:40.771333161 +0000 UTC m=+0.772831207 container died 2b29128f645a20e1d5047ac0a497b56a1584e7241d8c125e830bd4a3adb5762a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_gagarin, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 09:53:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6ebf9240f359cfa7375ad26f0f6061ac518ef8424f6284fd31f4e901c7d2ec5-merged.mount: Deactivated successfully.
Dec 05 09:53:40 compute-0 podman[121746]: 2025-12-05 09:53:40.824764426 +0000 UTC m=+0.826262472 container remove 2b29128f645a20e1d5047ac0a497b56a1584e7241d8c125e830bd4a3adb5762a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_gagarin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:53:40 compute-0 systemd[1]: libpod-conmon-2b29128f645a20e1d5047ac0a497b56a1584e7241d8c125e830bd4a3adb5762a.scope: Deactivated successfully.
Dec 05 09:53:40 compute-0 sudo[121636]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:40 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:40 compute-0 sudo[121793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:53:40 compute-0 sudo[121793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:40 compute-0 sudo[121793]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:40 compute-0 sudo[121818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:53:40 compute-0 sudo[121818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Dec 05 09:53:41 compute-0 podman[121885]: 2025-12-05 09:53:41.339994236 +0000 UTC m=+0.042014765 container create dde1c80c82f60db0880e2cfa87066662c70c8b3de7d39962ce30fd6c08ba89ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 05 09:53:41 compute-0 systemd[1]: Started libpod-conmon-dde1c80c82f60db0880e2cfa87066662c70c8b3de7d39962ce30fd6c08ba89ca.scope.
Dec 05 09:53:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:41 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:53:41 compute-0 podman[121885]: 2025-12-05 09:53:41.323020794 +0000 UTC m=+0.025041353 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:53:41 compute-0 podman[121885]: 2025-12-05 09:53:41.419925483 +0000 UTC m=+0.121946042 container init dde1c80c82f60db0880e2cfa87066662c70c8b3de7d39962ce30fd6c08ba89ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_taussig, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:53:41 compute-0 podman[121885]: 2025-12-05 09:53:41.428826835 +0000 UTC m=+0.130847364 container start dde1c80c82f60db0880e2cfa87066662c70c8b3de7d39962ce30fd6c08ba89ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 05 09:53:41 compute-0 podman[121885]: 2025-12-05 09:53:41.432379542 +0000 UTC m=+0.134400071 container attach dde1c80c82f60db0880e2cfa87066662c70c8b3de7d39962ce30fd6c08ba89ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_taussig, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:53:41 compute-0 objective_taussig[121914]: 167 167
Dec 05 09:53:41 compute-0 systemd[1]: libpod-dde1c80c82f60db0880e2cfa87066662c70c8b3de7d39962ce30fd6c08ba89ca.scope: Deactivated successfully.
Dec 05 09:53:41 compute-0 podman[121885]: 2025-12-05 09:53:41.435162118 +0000 UTC m=+0.137182667 container died dde1c80c82f60db0880e2cfa87066662c70c8b3de7d39962ce30fd6c08ba89ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:53:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:41.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9246d92f8578a897a74c57d1d32cf01798933bbfaf91ec1c452a554889e9b276-merged.mount: Deactivated successfully.
Dec 05 09:53:41 compute-0 podman[121885]: 2025-12-05 09:53:41.606332359 +0000 UTC m=+0.308352898 container remove dde1c80c82f60db0880e2cfa87066662c70c8b3de7d39962ce30fd6c08ba89ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_taussig, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 05 09:53:41 compute-0 systemd[1]: libpod-conmon-dde1c80c82f60db0880e2cfa87066662c70c8b3de7d39962ce30fd6c08ba89ca.scope: Deactivated successfully.
Dec 05 09:53:41 compute-0 sudo[122068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctzxdcvmebvxxjhclreuijwzyxdeajfb ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764928421.401039-554-260106331543074/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764928421.401039-554-260106331543074/args'
Dec 05 09:53:41 compute-0 sudo[122068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:41 compute-0 podman[122078]: 2025-12-05 09:53:41.752731105 +0000 UTC m=+0.040467192 container create f5c03c421af191b6cd139916da5e8c0f79dcf91f0fe26fc4082493eb6fd27d29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wescoff, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:53:41 compute-0 sudo[122068]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:41 compute-0 systemd[1]: Started libpod-conmon-f5c03c421af191b6cd139916da5e8c0f79dcf91f0fe26fc4082493eb6fd27d29.scope.
Dec 05 09:53:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f77eed00c69df96b0ed3d47a35264250e10e490f8af53844962ef7901e89f68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f77eed00c69df96b0ed3d47a35264250e10e490f8af53844962ef7901e89f68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f77eed00c69df96b0ed3d47a35264250e10e490f8af53844962ef7901e89f68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f77eed00c69df96b0ed3d47a35264250e10e490f8af53844962ef7901e89f68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:41 compute-0 podman[122078]: 2025-12-05 09:53:41.828663183 +0000 UTC m=+0.116399290 container init f5c03c421af191b6cd139916da5e8c0f79dcf91f0fe26fc4082493eb6fd27d29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wescoff, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:53:41 compute-0 podman[122078]: 2025-12-05 09:53:41.735822545 +0000 UTC m=+0.023558652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:53:41 compute-0 podman[122078]: 2025-12-05 09:53:41.836950419 +0000 UTC m=+0.124686496 container start f5c03c421af191b6cd139916da5e8c0f79dcf91f0fe26fc4082493eb6fd27d29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:53:41 compute-0 podman[122078]: 2025-12-05 09:53:41.840457725 +0000 UTC m=+0.128193872 container attach f5c03c421af191b6cd139916da5e8c0f79dcf91f0fe26fc4082493eb6fd27d29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]: {
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:     "1": [
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:         {
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             "devices": [
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "/dev/loop3"
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             ],
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             "lv_name": "ceph_lv0",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             "lv_size": "21470642176",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             "name": "ceph_lv0",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             "tags": {
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.cluster_name": "ceph",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.crush_device_class": "",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.encrypted": "0",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.osd_id": "1",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.type": "block",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.vdo": "0",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:                 "ceph.with_tpm": "0"
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             },
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             "type": "block",
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:             "vg_name": "ceph_vg0"
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:         }
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]:     ]
Dec 05 09:53:42 compute-0 wonderful_wescoff[122107]: }
Dec 05 09:53:42 compute-0 systemd[1]: libpod-f5c03c421af191b6cd139916da5e8c0f79dcf91f0fe26fc4082493eb6fd27d29.scope: Deactivated successfully.
Dec 05 09:53:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:42.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:42 compute-0 podman[122141]: 2025-12-05 09:53:42.160991833 +0000 UTC m=+0.024659312 container died f5c03c421af191b6cd139916da5e8c0f79dcf91f0fe26fc4082493eb6fd27d29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 09:53:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f77eed00c69df96b0ed3d47a35264250e10e490f8af53844962ef7901e89f68-merged.mount: Deactivated successfully.
Dec 05 09:53:42 compute-0 podman[122141]: 2025-12-05 09:53:42.197423775 +0000 UTC m=+0.061091234 container remove f5c03c421af191b6cd139916da5e8c0f79dcf91f0fe26fc4082493eb6fd27d29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 09:53:42 compute-0 systemd[1]: libpod-conmon-f5c03c421af191b6cd139916da5e8c0f79dcf91f0fe26fc4082493eb6fd27d29.scope: Deactivated successfully.
Dec 05 09:53:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:42 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:42 compute-0 sudo[121818]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:42 compute-0 ceph-mon[74418]: pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Dec 05 09:53:42 compute-0 sudo[122205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:53:42 compute-0 sudo[122205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:42 compute-0 sudo[122205]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:42 compute-0 sudo[122254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:53:42 compute-0 sudo[122254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:42 compute-0 sudo[122332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhhlxalcrkabzdntsreficlxpyqqanst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928422.2132556-587-203241414897819/AnsiballZ_dnf.py'
Dec 05 09:53:42 compute-0 sudo[122332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:53:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:53:42 compute-0 python3.9[122334]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:53:42 compute-0 podman[122378]: 2025-12-05 09:53:42.757153268 +0000 UTC m=+0.037636787 container create 1d9ac25a48b1bfc301b5edc5c0f9f286b9721dc7c17b1f1a75a3db6dd805bc4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_kowalevski, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:53:42 compute-0 systemd[1]: Started libpod-conmon-1d9ac25a48b1bfc301b5edc5c0f9f286b9721dc7c17b1f1a75a3db6dd805bc4c.scope.
Dec 05 09:53:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:53:42 compute-0 podman[122378]: 2025-12-05 09:53:42.82518428 +0000 UTC m=+0.105667789 container init 1d9ac25a48b1bfc301b5edc5c0f9f286b9721dc7c17b1f1a75a3db6dd805bc4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:53:42 compute-0 podman[122378]: 2025-12-05 09:53:42.832390687 +0000 UTC m=+0.112874176 container start 1d9ac25a48b1bfc301b5edc5c0f9f286b9721dc7c17b1f1a75a3db6dd805bc4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec 05 09:53:42 compute-0 podman[122378]: 2025-12-05 09:53:42.739669602 +0000 UTC m=+0.020153121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:53:42 compute-0 tender_kowalevski[122394]: 167 167
Dec 05 09:53:42 compute-0 podman[122378]: 2025-12-05 09:53:42.836473688 +0000 UTC m=+0.116957207 container attach 1d9ac25a48b1bfc301b5edc5c0f9f286b9721dc7c17b1f1a75a3db6dd805bc4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_kowalevski, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 09:53:42 compute-0 systemd[1]: libpod-1d9ac25a48b1bfc301b5edc5c0f9f286b9721dc7c17b1f1a75a3db6dd805bc4c.scope: Deactivated successfully.
Dec 05 09:53:42 compute-0 conmon[122394]: conmon 1d9ac25a48b1bfc301b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d9ac25a48b1bfc301b5edc5c0f9f286b9721dc7c17b1f1a75a3db6dd805bc4c.scope/container/memory.events
Dec 05 09:53:42 compute-0 podman[122378]: 2025-12-05 09:53:42.838141533 +0000 UTC m=+0.118625052 container died 1d9ac25a48b1bfc301b5edc5c0f9f286b9721dc7c17b1f1a75a3db6dd805bc4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_kowalevski, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:53:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-197c0d3cd83e0125c58d6a2adbf29fb0e1647fd97ccfe6efc5e9ad3b14ee12ba-merged.mount: Deactivated successfully.
Dec 05 09:53:42 compute-0 podman[122378]: 2025-12-05 09:53:42.888356951 +0000 UTC m=+0.168840440 container remove 1d9ac25a48b1bfc301b5edc5c0f9f286b9721dc7c17b1f1a75a3db6dd805bc4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:53:42 compute-0 systemd[1]: libpod-conmon-1d9ac25a48b1bfc301b5edc5c0f9f286b9721dc7c17b1f1a75a3db6dd805bc4c.scope: Deactivated successfully.
Dec 05 09:53:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:42 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:43 compute-0 podman[122419]: 2025-12-05 09:53:43.077780479 +0000 UTC m=+0.052882812 container create 5e461e32a07ba2b78f27f6a27cdbc74466d278cd8590cec55cd5041b930c343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lichterman, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:53:43 compute-0 systemd[1]: Started libpod-conmon-5e461e32a07ba2b78f27f6a27cdbc74466d278cd8590cec55cd5041b930c343d.scope.
Dec 05 09:53:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1f99f01e0e80d67ee7852ff4e40cd856173488353e1133439e756facf0edc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1f99f01e0e80d67ee7852ff4e40cd856173488353e1133439e756facf0edc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:43 compute-0 podman[122419]: 2025-12-05 09:53:43.053277172 +0000 UTC m=+0.028379525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1f99f01e0e80d67ee7852ff4e40cd856173488353e1133439e756facf0edc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1f99f01e0e80d67ee7852ff4e40cd856173488353e1133439e756facf0edc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:43 compute-0 podman[122419]: 2025-12-05 09:53:43.155692791 +0000 UTC m=+0.130795154 container init 5e461e32a07ba2b78f27f6a27cdbc74466d278cd8590cec55cd5041b930c343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lichterman, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 09:53:43 compute-0 podman[122419]: 2025-12-05 09:53:43.172228521 +0000 UTC m=+0.147330854 container start 5e461e32a07ba2b78f27f6a27cdbc74466d278cd8590cec55cd5041b930c343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:53:43 compute-0 podman[122419]: 2025-12-05 09:53:43.176814666 +0000 UTC m=+0.151917019 container attach 5e461e32a07ba2b78f27f6a27cdbc74466d278cd8590cec55cd5041b930c343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lichterman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 09:53:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Dec 05 09:53:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:43 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:53:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:43.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:43 compute-0 lvm[122510]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:53:43 compute-0 lvm[122510]: VG ceph_vg0 finished
Dec 05 09:53:43 compute-0 nervous_lichterman[122435]: {}
Dec 05 09:53:43 compute-0 systemd[1]: libpod-5e461e32a07ba2b78f27f6a27cdbc74466d278cd8590cec55cd5041b930c343d.scope: Deactivated successfully.
Dec 05 09:53:43 compute-0 systemd[1]: libpod-5e461e32a07ba2b78f27f6a27cdbc74466d278cd8590cec55cd5041b930c343d.scope: Consumed 1.029s CPU time.
Dec 05 09:53:43 compute-0 podman[122419]: 2025-12-05 09:53:43.870126726 +0000 UTC m=+0.845229059 container died 5e461e32a07ba2b78f27f6a27cdbc74466d278cd8590cec55cd5041b930c343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lichterman, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:53:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e1f99f01e0e80d67ee7852ff4e40cd856173488353e1133439e756facf0edc1-merged.mount: Deactivated successfully.
Dec 05 09:53:43 compute-0 podman[122419]: 2025-12-05 09:53:43.923349665 +0000 UTC m=+0.898451998 container remove 5e461e32a07ba2b78f27f6a27cdbc74466d278cd8590cec55cd5041b930c343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lichterman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 09:53:43 compute-0 systemd[1]: libpod-conmon-5e461e32a07ba2b78f27f6a27cdbc74466d278cd8590cec55cd5041b930c343d.scope: Deactivated successfully.
Dec 05 09:53:43 compute-0 sudo[122254]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:53:43 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:53:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:53:44 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:53:44 compute-0 sudo[122332]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:44 compute-0 sudo[122528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:53:44 compute-0 sudo[122528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:44 compute-0 sudo[122528]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:44.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:44 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:53:44 compute-0 ceph-mon[74418]: pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Dec 05 09:53:44 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:53:44 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:53:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:44 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Dec 05 09:53:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:45 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec001b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:45 compute-0 sudo[122704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvbgofflwisjjelpvidyheuqwedoliuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928424.8918555-626-235548702403807/AnsiballZ_package_facts.py'
Dec 05 09:53:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:45.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:45 compute-0 sudo[122704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:53:45] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 09:53:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:53:45] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 09:53:45 compute-0 python3.9[122706]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 05 09:53:45 compute-0 sudo[122704]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:53:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:46.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:53:46 compute-0 sudo[122732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:53:46 compute-0 sudo[122732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:53:46 compute-0 sudo[122732]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:46 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:46 compute-0 ceph-mon[74418]: pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Dec 05 09:53:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:46 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:46.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:53:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:46.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:53:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:46.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:53:47 compute-0 sudo[122883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnushjmrbujpvenmocajzcsxxnlstswm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928426.761705-656-216318936811981/AnsiballZ_stat.py'
Dec 05 09:53:47 compute-0 sudo[122883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Dec 05 09:53:47 compute-0 python3.9[122885]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:53:47 compute-0 sudo[122883]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:47 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:47.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:47 compute-0 sudo[122961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upinriaayyjcycuqhaxqvqqnvstaznku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928426.761705-656-216318936811981/AnsiballZ_file.py'
Dec 05 09:53:47 compute-0 sudo[122961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:47 compute-0 ceph-mon[74418]: pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Dec 05 09:53:47 compute-0 python3.9[122963]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:53:47 compute-0 sudo[122961]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:48.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:48 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec001b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:48 compute-0 sudo[123115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hinyztpvjtikkugjccltwisalbdwazty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928428.203931-692-214217996011988/AnsiballZ_stat.py'
Dec 05 09:53:48 compute-0 sudo[123115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:48 compute-0 python3.9[123117]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:53:48 compute-0 sudo[123115]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:48.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:53:48 compute-0 sudo[123193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnzzxwuevgtvpnmncrsymuixjzmcgjoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928428.203931-692-214217996011988/AnsiballZ_file.py'
Dec 05 09:53:48 compute-0 sudo[123193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:48 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:49 compute-0 python3.9[123195]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:53:49 compute-0 sudo[123193]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Dec 05 09:53:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:53:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:49 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:49.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:53:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:50.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:50 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:50 compute-0 ceph-mon[74418]: pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Dec 05 09:53:50 compute-0 sudo[123347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-entdockbpbawzzdqcoiopncxrjfcryqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928430.2335255-746-143303201652899/AnsiballZ_lineinfile.py'
Dec 05 09:53:50 compute-0 sudo[123347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:50 compute-0 python3.9[123349]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:53:50 compute-0 sudo[123347]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:50 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec001b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:51 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 09:53:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:51.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 09:53:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:52.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:52 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:52 compute-0 ceph-mon[74418]: pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:52 compute-0 sudo[123501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqvnquuwoqypqpiatkvgubxwnykbuztp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928432.0566201-791-200105923831907/AnsiballZ_setup.py'
Dec 05 09:53:52 compute-0 sudo[123501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:52 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:52 compute-0 python3.9[123503]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:53:53 compute-0 sudo[123501]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:53 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:53.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:53 compute-0 sudo[123585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liydohvrkaybpiikshjovwjqovlirorv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928432.0566201-791-200105923831907/AnsiballZ_systemd.py'
Dec 05 09:53:53 compute-0 sudo[123585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:53:54 compute-0 python3.9[123587]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:53:54 compute-0 sudo[123585]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:54.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:54 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:53:54 compute-0 ceph-mon[74418]: pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:54 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:55 compute-0 sshd-session[117412]: Connection closed by 192.168.122.30 port 56254
Dec 05 09:53:55 compute-0 sshd-session[117409]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:53:55 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Dec 05 09:53:55 compute-0 systemd[1]: session-42.scope: Consumed 23.797s CPU time.
Dec 05 09:53:55 compute-0 systemd-logind[789]: Session 42 logged out. Waiting for processes to exit.
Dec 05 09:53:55 compute-0 systemd-logind[789]: Removed session 42.
Dec 05 09:53:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:55 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:55.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:53:55] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 09:53:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:53:55] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 09:53:55 compute-0 ceph-mon[74418]: pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:56.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:56 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:56 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:56.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:53:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:57 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:57.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:53:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:53:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:53:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:53:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:53:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:53:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:53:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:53:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:53:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:53:58.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:53:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:58 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:58 compute-0 ceph-mon[74418]: pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:53:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:53:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:53:58.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:53:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:58 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:53:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:53:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:53:59 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:53:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:53:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:53:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:53:59.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:00.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:00 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:00 compute-0 ceph-mon[74418]: pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:54:00 compute-0 sshd-session[123623]: Accepted publickey for zuul from 192.168.122.30 port 39338 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:54:00 compute-0 systemd-logind[789]: New session 43 of user zuul.
Dec 05 09:54:00 compute-0 systemd[1]: Started Session 43 of User zuul.
Dec 05 09:54:00 compute-0 sshd-session[123623]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:54:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:00 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:01 compute-0 anacron[4305]: Job `cron.monthly' started
Dec 05 09:54:01 compute-0 anacron[4305]: Job `cron.monthly' terminated
Dec 05 09:54:01 compute-0 anacron[4305]: Normal exit (3 jobs run)
Dec 05 09:54:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:01 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:01 compute-0 sudo[123778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzygubgffmvnafecszratsmwnwuwbwqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928440.9884405-26-15685487954705/AnsiballZ_file.py'
Dec 05 09:54:01 compute-0 sudo[123778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:01.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:01 compute-0 python3.9[123780]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:01 compute-0 sudo[123778]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:02.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:02 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:02 compute-0 sudo[123932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waptczdkoczorcddgxjrnyopjrjoyasj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928441.9116983-62-10366393291745/AnsiballZ_stat.py'
Dec 05 09:54:02 compute-0 sudo[123932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:02 compute-0 ceph-mon[74418]: pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:02 compute-0 python3.9[123934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:02 compute-0 sudo[123932]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:02 compute-0 sudo[124010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdvmtbohsmbgddpmlbotipppiwytudok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928441.9116983-62-10366393291745/AnsiballZ_file.py'
Dec 05 09:54:02 compute-0 sudo[124010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:02 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:03 compute-0 python3.9[124012]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:03 compute-0 sudo[124010]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:03 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:03 compute-0 sshd-session[123626]: Connection closed by 192.168.122.30 port 39338
Dec 05 09:54:03 compute-0 sshd-session[123623]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:54:03 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Dec 05 09:54:03 compute-0 systemd[1]: session-43.scope: Consumed 1.549s CPU time.
Dec 05 09:54:03 compute-0 systemd-logind[789]: Session 43 logged out. Waiting for processes to exit.
Dec 05 09:54:03 compute-0 systemd-logind[789]: Removed session 43.
Dec 05 09:54:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:03.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:04.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:04 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.387777) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928444388025, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1175, "num_deletes": 250, "total_data_size": 2286155, "memory_usage": 2330440, "flush_reason": "Manual Compaction"}
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928444407508, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1394368, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10970, "largest_seqno": 12144, "table_properties": {"data_size": 1389966, "index_size": 1926, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11249, "raw_average_key_size": 20, "raw_value_size": 1380460, "raw_average_value_size": 2469, "num_data_blocks": 85, "num_entries": 559, "num_filter_entries": 559, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928321, "oldest_key_time": 1764928321, "file_creation_time": 1764928444, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 19781 microseconds, and 9601 cpu microseconds.
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.407580) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1394368 bytes OK
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.407602) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.409380) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.409393) EVENT_LOG_v1 {"time_micros": 1764928444409390, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.409410) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2280872, prev total WAL file size 2280872, number of live WAL files 2.
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.410208) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1361KB)], [26(13MB)]
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928444410379, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 15879648, "oldest_snapshot_seqno": -1}
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4258 keys, 13608082 bytes, temperature: kUnknown
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928444552755, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 13608082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13575746, "index_size": 20578, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 108802, "raw_average_key_size": 25, "raw_value_size": 13494010, "raw_average_value_size": 3169, "num_data_blocks": 883, "num_entries": 4258, "num_filter_entries": 4258, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764928444, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:54:04 compute-0 ceph-mon[74418]: pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.553296) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 13608082 bytes
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.573448) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.4 rd, 95.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 13.8 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(21.1) write-amplify(9.8) OK, records in: 4724, records dropped: 466 output_compression: NoCompression
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.573475) EVENT_LOG_v1 {"time_micros": 1764928444573463, "job": 10, "event": "compaction_finished", "compaction_time_micros": 142593, "compaction_time_cpu_micros": 34366, "output_level": 6, "num_output_files": 1, "total_output_size": 13608082, "num_input_records": 4724, "num_output_records": 4258, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928444574065, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928444577185, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.410053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.577277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.577281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.577284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.577286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:54:04 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:54:04.577288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:54:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:04 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:05 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:05.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:05 compute-0 ceph-mon[74418]: pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:54:05] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:54:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:54:05] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:54:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:06.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:06 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:06 compute-0 sudo[124041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:54:06 compute-0 sudo[124041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:06 compute-0 sudo[124041]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:06 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:06.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:54:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:07 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:07.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:08.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:08 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:08 compute-0 ceph-mon[74418]: pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:08.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:54:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:08 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:54:09 compute-0 sshd-session[124068]: Accepted publickey for zuul from 192.168.122.30 port 50016 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:54:09 compute-0 systemd-logind[789]: New session 44 of user zuul.
Dec 05 09:54:09 compute-0 systemd[1]: Started Session 44 of User zuul.
Dec 05 09:54:09 compute-0 sshd-session[124068]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:54:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:54:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:09 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f4004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:09.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:10.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:10 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:10 compute-0 ceph-mon[74418]: pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:54:10 compute-0 python3.9[124222]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:54:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:10 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:11 compute-0 sudo[124378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joceheedkyyugalwgeraithpdgdngdta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928450.8433213-59-215244528758994/AnsiballZ_file.py'
Dec 05 09:54:11 compute-0 sudo[124378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:11 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:11.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:11 compute-0 python3.9[124380]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:11 compute-0 sudo[124378]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:12.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:12 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc604000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:12 compute-0 sudo[124555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxtupjawukcgigwwsiygitkjohpczhto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928451.7661815-83-93829090181691/AnsiballZ_stat.py'
Dec 05 09:54:12 compute-0 sudo[124555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:12 compute-0 ceph-mon[74418]: pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:12 compute-0 python3.9[124557]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:12 compute-0 sudo[124555]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:54:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:54:12 compute-0 sudo[124633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtnqkrgfahxjvvryuhhcnasdnvaxelal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928451.7661815-83-93829090181691/AnsiballZ_file.py'
Dec 05 09:54:12 compute-0 sudo[124633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:12 compute-0 python3.9[124635]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.y59urake recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:12 compute-0 sudo[124633]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:12 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:54:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:13 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:13.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:13 compute-0 sudo[124785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyfotjcaxjcgicqoxnddqczuvasilllz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928453.6957378-143-99149634441285/AnsiballZ_stat.py'
Dec 05 09:54:13 compute-0 sudo[124785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:14 compute-0 python3.9[124787]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:14.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:14 compute-0 sudo[124785]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:14 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8003f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:54:14 compute-0 sudo[124865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdvedpjuegcynsvegcyyszzjzmbfdiyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928453.6957378-143-99149634441285/AnsiballZ_file.py'
Dec 05 09:54:14 compute-0 sudo[124865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:14 compute-0 ceph-mon[74418]: pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:14 compute-0 python3.9[124867]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.dhsy_kpa recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:14 compute-0 sudo[124865]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:14 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc604000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:15 compute-0 sudo[125017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukfncxeshmdymtrwgsncgnrtvixnerqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928454.942234-182-5583852776263/AnsiballZ_file.py'
Dec 05 09:54:15 compute-0 sudo[125017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:15 compute-0 python3.9[125019]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:54:15 compute-0 sudo[125017]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:15 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:15.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:54:15] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 09:54:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:54:15] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 09:54:15 compute-0 sudo[125169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tixgeywrpcrbvnzhkvhvapgikimklsdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928455.6760652-206-216150701821740/AnsiballZ_stat.py'
Dec 05 09:54:15 compute-0 sudo[125169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:16 compute-0 python3.9[125171]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:16 compute-0 sudo[125169]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:16.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:16 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e80032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:16 compute-0 sudo[125249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twwhmphzyqbwjrtxfaoqddlxragmsude ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928455.6760652-206-216150701821740/AnsiballZ_file.py'
Dec 05 09:54:16 compute-0 sudo[125249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:16 compute-0 ceph-mon[74418]: pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:16 compute-0 python3.9[125251]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:54:16 compute-0 sudo[125249]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:16 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f80040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:16.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:54:16 compute-0 sudo[125401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lntmocaxdzufdmaivuryueykwmxzfdno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928456.7141747-206-60785241671238/AnsiballZ_stat.py'
Dec 05 09:54:16 compute-0 sudo[125401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:17 compute-0 python3.9[125403]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:17 compute-0 sudo[125401]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:17 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc604001dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec 05 09:54:17 compute-0 sudo[125479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vukkvzbdfbjpdsjekcqiuiqmzejqenqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928456.7141747-206-60785241671238/AnsiballZ_file.py'
Dec 05 09:54:17 compute-0 sudo[125479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:17.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:17 compute-0 python3.9[125481]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:54:17 compute-0 sudo[125479]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:18.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:18 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:18 compute-0 ceph-mon[74418]: pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:18 compute-0 sudo[125633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwsgwknbrukfxdbhzgzodgpxvzgtdceu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928458.4832182-275-276159552920458/AnsiballZ_file.py'
Dec 05 09:54:18 compute-0 sudo[125633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:18.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:54:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:18.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:54:18 compute-0 python3.9[125635]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:18 compute-0 sudo[125633]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:18 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:54:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:54:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:19 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f80040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:19 compute-0 sudo[125785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcssgvimmlqnyktztmvsacampduxsipd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928459.190399-299-252428044113306/AnsiballZ_stat.py'
Dec 05 09:54:19 compute-0 sudo[125785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:19.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:19 compute-0 python3.9[125787]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:19 compute-0 sudo[125785]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:19 compute-0 ceph-mon[74418]: pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:54:19 compute-0 sudo[125863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxdgbuzmhszxcqmdygsrrhnaqzfrsklq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928459.190399-299-252428044113306/AnsiballZ_file.py'
Dec 05 09:54:19 compute-0 sudo[125863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:20.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:20 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc604001dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:20 compute-0 python3.9[125865]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:20 compute-0 sudo[125863]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:20 compute-0 sudo[126017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msyaefqfdlugjzkrfylwteokeusipwhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928460.6191857-335-47345989082921/AnsiballZ_stat.py'
Dec 05 09:54:20 compute-0 sudo[126017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:20 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:21 compute-0 python3.9[126019]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:21 compute-0 sudo[126017]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:21 compute-0 sudo[126095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbsphjqjeyfoatotpdmjpnsxhkuomugg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928460.6191857-335-47345989082921/AnsiballZ_file.py'
Dec 05 09:54:21 compute-0 sudo[126095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:21 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc604001dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 09:54:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:21.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 09:54:21 compute-0 python3.9[126097]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:21 compute-0 sudo[126095]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:22.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:22 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:22 compute-0 ceph-mon[74418]: pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:22 compute-0 sudo[126249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tokrobuhbpsdfogqtkunhhwgmstopqvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928461.891141-371-105293861642220/AnsiballZ_systemd.py'
Dec 05 09:54:22 compute-0 sudo[126249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:22 compute-0 python3.9[126251]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:54:22 compute-0 systemd[1]: Reloading.
Dec 05 09:54:22 compute-0 systemd-rc-local-generator[126277]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:54:22 compute-0 systemd-sysv-generator[126280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:54:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:22 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f80040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:23 compute-0 sudo[126249]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:23 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 09:54:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:23.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 09:54:23 compute-0 sudo[126437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvgzvxfjtsnkzxyghvhxngmnowvuzvnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928463.3992643-395-194242682968623/AnsiballZ_stat.py'
Dec 05 09:54:23 compute-0 sudo[126437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:23 compute-0 ceph-mon[74418]: pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:23 compute-0 python3.9[126439]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:23 compute-0 sudo[126437]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:24 compute-0 sudo[126516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhmizsqbsgntkkafmoguvszjaivwfowm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928463.3992643-395-194242682968623/AnsiballZ_file.py'
Dec 05 09:54:24 compute-0 sudo[126516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:24.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:24 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc604001dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:24 compute-0 python3.9[126518]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:24 compute-0 sudo[126516]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:54:24 compute-0 sudo[126669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uctxufovnubaleswoiumifujcwrxzqte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928464.66515-431-177574551451360/AnsiballZ_stat.py'
Dec 05 09:54:24 compute-0 sudo[126669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:24 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:25 compute-0 python3.9[126671]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:25 compute-0 sudo[126669]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:25 compute-0 sudo[126747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijpkgvnozjjktmcnrpiskkmzwyfyslmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928464.66515-431-177574551451360/AnsiballZ_file.py'
Dec 05 09:54:25 compute-0 sudo[126747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:25 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f80040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:25.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:25 compute-0 python3.9[126749]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:25 compute-0 sudo[126747]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:54:25] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 09:54:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:54:25] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 09:54:26 compute-0 ceph-mon[74418]: pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 09:54:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:26.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 09:54:26 compute-0 sudo[126900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heydnkuybcvjxmndnhjzkbefpwzjxgsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928465.9450204-467-47970891871346/AnsiballZ_systemd.py'
Dec 05 09:54:26 compute-0 sudo[126900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:26 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:26 compute-0 sudo[126904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:54:26 compute-0 sudo[126904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:26 compute-0 sudo[126904]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:26 compute-0 python3.9[126903]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:54:26 compute-0 systemd[1]: Reloading.
Dec 05 09:54:26 compute-0 systemd-rc-local-generator[126953]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:54:26 compute-0 systemd-sysv-generator[126958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:54:26 compute-0 systemd[1]: Starting Create netns directory...
Dec 05 09:54:26 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 09:54:26 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 09:54:26 compute-0 systemd[1]: Finished Create netns directory.
Dec 05 09:54:26 compute-0 sudo[126900]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:26.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:54:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:26.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:54:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:26 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc604003640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:27 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:54:27
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.mgr', '.nfs', 'vms']
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 09:54:27 compute-0 sshd-session[71303]: Received disconnect from 38.129.56.31 port 58660:11: disconnected by user
Dec 05 09:54:27 compute-0 sshd-session[71303]: Disconnected from user zuul 38.129.56.31 port 58660
Dec 05 09:54:27 compute-0 sshd-session[71300]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:54:27 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Dec 05 09:54:27 compute-0 systemd[1]: session-18.scope: Consumed 1min 41.004s CPU time.
Dec 05 09:54:27 compute-0 systemd-logind[789]: Session 18 logged out. Waiting for processes to exit.
Dec 05 09:54:27 compute-0 systemd-logind[789]: Removed session 18.
Dec 05 09:54:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:27.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:54:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:54:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:54:27 compute-0 python3.9[127118]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:54:27 compute-0 network[127135]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:54:27 compute-0 network[127136]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:54:27 compute-0 network[127137]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:54:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:28.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:28 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f80040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:28 compute-0 ceph-mon[74418]: pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:54:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:28.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:54:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:28.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:54:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:28.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:54:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:28 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:54:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:29 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc604003640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:54:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:29.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:29 compute-0 ceph-mon[74418]: pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:54:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:30.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:30 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:30 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5ec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:31 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f80040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:31.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:31 compute-0 ceph-mon[74418]: pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:32 compute-0 sudo[127403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjsypujhhqdmlruumqulbqroifslgkbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928471.7220984-545-278719898598100/AnsiballZ_stat.py'
Dec 05 09:54:32 compute-0 sudo[127403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:32.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:32 compute-0 python3.9[127405]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:32 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610002af0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:32 compute-0 sudo[127403]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:32 compute-0 sudo[127483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukyxjivkolrvuuvufkbddekbdjurvigg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928471.7220984-545-278719898598100/AnsiballZ_file.py'
Dec 05 09:54:32 compute-0 sudo[127483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:32 compute-0 python3.9[127485]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:32 compute-0 sudo[127483]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:32 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc604003640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:33 compute-0 sudo[127635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrnkkqesckbbermlnsywqfrhvvpfnpmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928472.9879858-584-251794889215350/AnsiballZ_file.py'
Dec 05 09:54:33 compute-0 sudo[127635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:33 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:33 compute-0 python3.9[127637]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:33 compute-0 sudo[127635]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:34 compute-0 sudo[127788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igibpgrrvkkgpwmltrfegmvxzsydzbko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928473.7356946-608-142383924309736/AnsiballZ_stat.py'
Dec 05 09:54:34 compute-0 sudo[127788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:34.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:34 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f80040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:34 compute-0 ceph-mon[74418]: pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:34 compute-0 python3.9[127790]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:34 compute-0 sudo[127788]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:54:34 compute-0 sudo[127867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idiyceyxafpvmcigdmabbpqwunnjxcws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928473.7356946-608-142383924309736/AnsiballZ_file.py'
Dec 05 09:54:34 compute-0 sudo[127867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:34 compute-0 python3.9[127869]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:34 compute-0 sudo[127867]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:34 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610002af0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:35 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc604003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:35.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:54:35] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 09:54:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:54:35] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 09:54:35 compute-0 sudo[128019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzevunkphemuukbxjoedjqsiccxlsobq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928475.403119-653-146003132898309/AnsiballZ_timezone.py'
Dec 05 09:54:35 compute-0 sudo[128019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:36 compute-0 python3.9[128021]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 05 09:54:36 compute-0 systemd[1]: Starting Time & Date Service...
Dec 05 09:54:36 compute-0 systemd[1]: Started Time & Date Service.
Dec 05 09:54:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:36.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:36 compute-0 sudo[128019]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:36 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:36 compute-0 ceph-mon[74418]: pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:36 compute-0 sudo[128177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fefpenhwkelzijzwoeljmiavveyhvljj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928476.6082213-680-54922664815982/AnsiballZ_file.py'
Dec 05 09:54:36 compute-0 sudo[128177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:36.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:54:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:36 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f80040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:37 compute-0 python3.9[128179]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:37 compute-0 sudo[128177]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:37 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610002af0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:37 compute-0 sudo[128329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmmgnniaufeqxjsyflrozrjiwctarjol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928477.3815114-704-46312744338773/AnsiballZ_stat.py'
Dec 05 09:54:37 compute-0 sudo[128329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:37 compute-0 python3.9[128331]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:37 compute-0 sudo[128329]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:38 compute-0 sudo[128408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mylgqudvpudbyjbepojbtehwylmzlezh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928477.3815114-704-46312744338773/AnsiballZ_file.py'
Dec 05 09:54:38 compute-0 sudo[128408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:38.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:38 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc604003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:38 compute-0 ceph-mon[74418]: pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:54:38 compute-0 python3.9[128410]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:38 compute-0 sudo[128408]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:38.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:54:38 compute-0 sudo[128561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxezrlmwvymzuifppdwmkwxferdvzuum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928478.6519825-740-225159126947210/AnsiballZ_stat.py'
Dec 05 09:54:38 compute-0 sudo[128561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:38 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc604003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:39 compute-0 python3.9[128563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:39 compute-0 sudo[128561]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 09:54:39 compute-0 sudo[128639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjiqerjzszntzwcipthoaogtpuafwmmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928478.6519825-740-225159126947210/AnsiballZ_file.py'
Dec 05 09:54:39 compute-0 sudo[128639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095439 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:54:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:39 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f80040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:54:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 09:54:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 09:54:39 compute-0 python3.9[128641]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.7kwdrkkv recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:39 compute-0 sudo[128639]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:40 compute-0 sudo[128792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhfnqrlsfbbnnuvjhjfkqejgerjrwxno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928479.9261882-776-199101218365070/AnsiballZ_stat.py'
Dec 05 09:54:40 compute-0 sudo[128792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:40.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:40 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610002af0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:40 compute-0 ceph-mon[74418]: pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 09:54:40 compute-0 python3.9[128794]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:40 compute-0 sudo[128792]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:40 compute-0 sudo[128871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icekffddjslwwxjkycnclguddfkqedqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928479.9261882-776-199101218365070/AnsiballZ_file.py'
Dec 05 09:54:40 compute-0 sudo[128871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:40 compute-0 python3.9[128873]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:40 compute-0 sudo[128871]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:40 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610002af0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:54:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:41 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc610002af0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:54:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:41.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:41 compute-0 sudo[129023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppohimdtdekbyqgsxiwsioyuutlilzqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928481.268658-815-57979406745851/AnsiballZ_command.py'
Dec 05 09:54:41 compute-0 sudo[129023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:41 compute-0 ceph-mon[74418]: pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:54:41 compute-0 python3.9[129025]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:54:41 compute-0 sudo[129023]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:42.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:42 compute-0 kernel: ganesha.nfsd[119843]: segfault at 50 ip 00007fc6cabed32e sp 00007fc697ffe210 error 4 in libntirpc.so.5.8[7fc6cabd2000+2c000] likely on CPU 7 (core 0, socket 7)
Dec 05 09:54:42 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 05 09:54:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[117684]: 05/12/2025 09:54:42 : epoch 6932ab8b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8004100 fd 39 proxy ignored for local
Dec 05 09:54:42 compute-0 systemd[1]: Started Process Core Dump (PID 129105/UID 0).
Dec 05 09:54:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:54:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:54:42 compute-0 sudo[129180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybxdgjjvwazwyczduyizivpjlllzbufp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764928482.176034-839-191561127168923/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 09:54:42 compute-0 sudo[129180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:54:42 compute-0 python3[129182]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 09:54:42 compute-0 sudo[129180]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:54:43 compute-0 sudo[129332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkiaormnnajcpelgdxxcxspkiwjifkyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928483.0969238-863-145209188847631/AnsiballZ_stat.py'
Dec 05 09:54:43 compute-0 sudo[129332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 09:54:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 09:54:43 compute-0 python3.9[129334]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:43 compute-0 sudo[129332]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:43 compute-0 systemd-coredump[129106]: Process 118030 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 45:
                                                    #0  0x00007fc6cabed32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 05 09:54:43 compute-0 systemd[1]: systemd-coredump@2-129105-0.service: Deactivated successfully.
Dec 05 09:54:43 compute-0 systemd[1]: systemd-coredump@2-129105-0.service: Consumed 1.397s CPU time.
Dec 05 09:54:43 compute-0 sudo[129420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkkrahvqvtdukchmscgwcdktzaoerbou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928483.0969238-863-145209188847631/AnsiballZ_file.py'
Dec 05 09:54:43 compute-0 sudo[129420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:43 compute-0 ceph-mon[74418]: pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:54:43 compute-0 podman[129401]: 2025-12-05 09:54:43.841988322 +0000 UTC m=+0.038773585 container died 972a80b3db4ec87a0de4970a0c44d162b66da273f0d7ab070e3c06feb681d206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 09:54:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c16d39076c205eb771d8e7457ee83dc0aa0ee695ce4e2f4047f9aa94182e506-merged.mount: Deactivated successfully.
Dec 05 09:54:43 compute-0 podman[129401]: 2025-12-05 09:54:43.891041881 +0000 UTC m=+0.087827104 container remove 972a80b3db4ec87a0de4970a0c44d162b66da273f0d7ab070e3c06feb681d206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:54:43 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Main process exited, code=exited, status=139/n/a
Dec 05 09:54:44 compute-0 python3.9[129427]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:44 compute-0 sudo[129420]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:44 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Failed with result 'exit-code'.
Dec 05 09:54:44 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 1.831s CPU time.
Dec 05 09:54:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:44.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:44 compute-0 sudo[129495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:54:44 compute-0 sudo[129495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:44 compute-0 sudo[129495]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:44 compute-0 sudo[129548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 05 09:54:44 compute-0 sudo[129548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:54:44 compute-0 sudo[129675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfdestpyphwntdnfhznqipqehguqxzmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928484.3272324-899-212638354025849/AnsiballZ_stat.py'
Dec 05 09:54:44 compute-0 sudo[129675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:44 compute-0 sudo[129548]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:54:44 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:54:44 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:44 compute-0 sudo[129684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:54:44 compute-0 sudo[129684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:44 compute-0 sudo[129684]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 09:54:44 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 09:54:44 compute-0 python3.9[129681]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:44 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:44 compute-0 sudo[129709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:54:44 compute-0 sudo[129709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:44 compute-0 sudo[129675]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:45 compute-0 sudo[129821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rghchesdbeobustjipkdyskyvhgbuywv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928484.3272324-899-212638354025849/AnsiballZ_file.py'
Dec 05 09:54:45 compute-0 sudo[129821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:54:45 compute-0 python3.9[129825]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:45 compute-0 sudo[129821]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:45 compute-0 sudo[129709]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:54:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:54:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:54:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:54:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 198 B/s rd, 0 op/s
Dec 05 09:54:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:54:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 09:54:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:45.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 09:54:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:54:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:54:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:54:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:54:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:54:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:54:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:54:45 compute-0 sudo[129867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:54:45 compute-0 sudo[129867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:45 compute-0 sudo[129867]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:54:45] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:54:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:54:45] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:54:45 compute-0 sudo[129915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:54:45 compute-0 sudo[129915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:45 compute-0 ceph-mon[74418]: pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:54:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:54:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:54:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:54:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:54:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:54:45 compute-0 sudo[130056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccdmoguhdhpuqcuaggyeaecmherpdrst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928485.621448-935-56060190866165/AnsiballZ_stat.py'
Dec 05 09:54:45 compute-0 sudo[130056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:46 compute-0 podman[130087]: 2025-12-05 09:54:46.06967874 +0000 UTC m=+0.041571522 container create d019d1c8a279a88034b3d247d6bea985ffdaa5523bf272f4b1ebd4313bc3d42f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 09:54:46 compute-0 python3.9[130067]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:46 compute-0 systemd[1]: Started libpod-conmon-d019d1c8a279a88034b3d247d6bea985ffdaa5523bf272f4b1ebd4313bc3d42f.scope.
Dec 05 09:54:46 compute-0 sudo[130056]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:54:46 compute-0 podman[130087]: 2025-12-05 09:54:46.148489852 +0000 UTC m=+0.120382644 container init d019d1c8a279a88034b3d247d6bea985ffdaa5523bf272f4b1ebd4313bc3d42f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:54:46 compute-0 podman[130087]: 2025-12-05 09:54:46.054296974 +0000 UTC m=+0.026189776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:54:46 compute-0 podman[130087]: 2025-12-05 09:54:46.156323409 +0000 UTC m=+0.128216201 container start d019d1c8a279a88034b3d247d6bea985ffdaa5523bf272f4b1ebd4313bc3d42f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_galois, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 09:54:46 compute-0 podman[130087]: 2025-12-05 09:54:46.158899021 +0000 UTC m=+0.130791803 container attach d019d1c8a279a88034b3d247d6bea985ffdaa5523bf272f4b1ebd4313bc3d42f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 09:54:46 compute-0 elastic_galois[130106]: 167 167
Dec 05 09:54:46 compute-0 systemd[1]: libpod-d019d1c8a279a88034b3d247d6bea985ffdaa5523bf272f4b1ebd4313bc3d42f.scope: Deactivated successfully.
Dec 05 09:54:46 compute-0 conmon[130106]: conmon d019d1c8a279a88034b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d019d1c8a279a88034b3d247d6bea985ffdaa5523bf272f4b1ebd4313bc3d42f.scope/container/memory.events
Dec 05 09:54:46 compute-0 podman[130087]: 2025-12-05 09:54:46.163433696 +0000 UTC m=+0.135326478 container died d019d1c8a279a88034b3d247d6bea985ffdaa5523bf272f4b1ebd4313bc3d42f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_galois, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 09:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-aceeda8236cdc9297f93dc0530cf1657669c1ee62a725e937c1a07c5ce556ba6-merged.mount: Deactivated successfully.
Dec 05 09:54:46 compute-0 podman[130087]: 2025-12-05 09:54:46.198866127 +0000 UTC m=+0.170758919 container remove d019d1c8a279a88034b3d247d6bea985ffdaa5523bf272f4b1ebd4313bc3d42f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:54:46 compute-0 systemd[1]: libpod-conmon-d019d1c8a279a88034b3d247d6bea985ffdaa5523bf272f4b1ebd4313bc3d42f.scope: Deactivated successfully.
Dec 05 09:54:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:46.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:46 compute-0 podman[130178]: 2025-12-05 09:54:46.33620002 +0000 UTC m=+0.042027175 container create d8efedbd8dfb4d8961878db31e7dc6d5d49362cf23d7e1c5f2654236db0b8be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:54:46 compute-0 systemd[1]: Started libpod-conmon-d8efedbd8dfb4d8961878db31e7dc6d5d49362cf23d7e1c5f2654236db0b8be9.scope.
Dec 05 09:54:46 compute-0 sudo[130216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pafjjdlptgdsrhakhnwryymbxjpbxals ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928485.621448-935-56060190866165/AnsiballZ_file.py'
Dec 05 09:54:46 compute-0 sudo[130216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41561164a88597da91f55e97751deff9928b184257bbfcf8d3233e52dd54c54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41561164a88597da91f55e97751deff9928b184257bbfcf8d3233e52dd54c54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41561164a88597da91f55e97751deff9928b184257bbfcf8d3233e52dd54c54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41561164a88597da91f55e97751deff9928b184257bbfcf8d3233e52dd54c54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41561164a88597da91f55e97751deff9928b184257bbfcf8d3233e52dd54c54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:46 compute-0 podman[130178]: 2025-12-05 09:54:46.318304585 +0000 UTC m=+0.024131810 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:54:46 compute-0 sudo[130223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:54:46 compute-0 podman[130178]: 2025-12-05 09:54:46.470947532 +0000 UTC m=+0.176774707 container init d8efedbd8dfb4d8961878db31e7dc6d5d49362cf23d7e1c5f2654236db0b8be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 09:54:46 compute-0 sudo[130223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:46 compute-0 sudo[130223]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:46 compute-0 podman[130178]: 2025-12-05 09:54:46.485107384 +0000 UTC m=+0.190934539 container start d8efedbd8dfb4d8961878db31e7dc6d5d49362cf23d7e1c5f2654236db0b8be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 05 09:54:46 compute-0 podman[130178]: 2025-12-05 09:54:46.488451987 +0000 UTC m=+0.194279222 container attach d8efedbd8dfb4d8961878db31e7dc6d5d49362cf23d7e1c5f2654236db0b8be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:54:46 compute-0 python3.9[130231]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:46 compute-0 sudo[130216]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:46 compute-0 angry_sutherland[130227]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:54:46 compute-0 angry_sutherland[130227]: --> All data devices are unavailable
Dec 05 09:54:46 compute-0 systemd[1]: libpod-d8efedbd8dfb4d8961878db31e7dc6d5d49362cf23d7e1c5f2654236db0b8be9.scope: Deactivated successfully.
Dec 05 09:54:46 compute-0 podman[130178]: 2025-12-05 09:54:46.825323825 +0000 UTC m=+0.531151000 container died d8efedbd8dfb4d8961878db31e7dc6d5d49362cf23d7e1c5f2654236db0b8be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:54:46 compute-0 ceph-mon[74418]: pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 198 B/s rd, 0 op/s
Dec 05 09:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e41561164a88597da91f55e97751deff9928b184257bbfcf8d3233e52dd54c54-merged.mount: Deactivated successfully.
Dec 05 09:54:46 compute-0 podman[130178]: 2025-12-05 09:54:46.868352976 +0000 UTC m=+0.574180131 container remove d8efedbd8dfb4d8961878db31e7dc6d5d49362cf23d7e1c5f2654236db0b8be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 09:54:46 compute-0 systemd[1]: libpod-conmon-d8efedbd8dfb4d8961878db31e7dc6d5d49362cf23d7e1c5f2654236db0b8be9.scope: Deactivated successfully.
Dec 05 09:54:46 compute-0 sudo[129915]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:46 compute-0 sudo[130358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:54:46 compute-0 sudo[130358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:46 compute-0 sudo[130358]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:46.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:54:47 compute-0 sudo[130406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:54:47 compute-0 sudo[130406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:47 compute-0 sudo[130473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybudxovvzrchbxsehkksoulxksrjcjzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928486.7968466-971-218801223094849/AnsiballZ_stat.py'
Dec 05 09:54:47 compute-0 sudo[130473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:47 compute-0 python3.9[130475]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:47 compute-0 sudo[130473]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 198 B/s rd, 0 op/s
Dec 05 09:54:47 compute-0 podman[130541]: 2025-12-05 09:54:47.534704118 +0000 UTC m=+0.040948454 container create c99ef9a95e1b6b17c4ebeac43a2d5d2a87c808b51c3fa2b90817067837c5ecfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 05 09:54:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:47.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:47 compute-0 systemd[1]: Started libpod-conmon-c99ef9a95e1b6b17c4ebeac43a2d5d2a87c808b51c3fa2b90817067837c5ecfd.scope.
Dec 05 09:54:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:54:47 compute-0 sudo[130610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vahkzpncxnwpvdsggpzscnodvzulnapf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928486.7968466-971-218801223094849/AnsiballZ_file.py'
Dec 05 09:54:47 compute-0 sudo[130610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:47 compute-0 podman[130541]: 2025-12-05 09:54:47.516394582 +0000 UTC m=+0.022638958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:54:47 compute-0 podman[130541]: 2025-12-05 09:54:47.611548197 +0000 UTC m=+0.117792563 container init c99ef9a95e1b6b17c4ebeac43a2d5d2a87c808b51c3fa2b90817067837c5ecfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:54:47 compute-0 podman[130541]: 2025-12-05 09:54:47.619931999 +0000 UTC m=+0.126176355 container start c99ef9a95e1b6b17c4ebeac43a2d5d2a87c808b51c3fa2b90817067837c5ecfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lamport, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:54:47 compute-0 nervous_lamport[130604]: 167 167
Dec 05 09:54:47 compute-0 podman[130541]: 2025-12-05 09:54:47.624132845 +0000 UTC m=+0.130377241 container attach c99ef9a95e1b6b17c4ebeac43a2d5d2a87c808b51c3fa2b90817067837c5ecfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lamport, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:54:47 compute-0 systemd[1]: libpod-c99ef9a95e1b6b17c4ebeac43a2d5d2a87c808b51c3fa2b90817067837c5ecfd.scope: Deactivated successfully.
Dec 05 09:54:47 compute-0 podman[130541]: 2025-12-05 09:54:47.624953778 +0000 UTC m=+0.131198124 container died c99ef9a95e1b6b17c4ebeac43a2d5d2a87c808b51c3fa2b90817067837c5ecfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 09:54:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-064be9e9b0577f8c2bc9273f11dc98dce2bc33146e83f97676670c8846518137-merged.mount: Deactivated successfully.
Dec 05 09:54:47 compute-0 podman[130541]: 2025-12-05 09:54:47.673097971 +0000 UTC m=+0.179342327 container remove c99ef9a95e1b6b17c4ebeac43a2d5d2a87c808b51c3fa2b90817067837c5ecfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:54:47 compute-0 systemd[1]: libpod-conmon-c99ef9a95e1b6b17c4ebeac43a2d5d2a87c808b51c3fa2b90817067837c5ecfd.scope: Deactivated successfully.
Dec 05 09:54:47 compute-0 python3.9[130612]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:47 compute-0 sudo[130610]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:47 compute-0 podman[130633]: 2025-12-05 09:54:47.880356291 +0000 UTC m=+0.044542115 container create 045fac971fdc969816bbbf321b01ec117f6e8bcfd70061e173e0379eac4c87ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_blackwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 09:54:47 compute-0 systemd[1]: Started libpod-conmon-045fac971fdc969816bbbf321b01ec117f6e8bcfd70061e173e0379eac4c87ca.scope.
Dec 05 09:54:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:54:47 compute-0 podman[130633]: 2025-12-05 09:54:47.864985454 +0000 UTC m=+0.029171298 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:54:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af230b4aa97831a78371c7a0f6b2d8ab2d39024bfd112004e1295229c905cce8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af230b4aa97831a78371c7a0f6b2d8ab2d39024bfd112004e1295229c905cce8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af230b4aa97831a78371c7a0f6b2d8ab2d39024bfd112004e1295229c905cce8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af230b4aa97831a78371c7a0f6b2d8ab2d39024bfd112004e1295229c905cce8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:47 compute-0 podman[130633]: 2025-12-05 09:54:47.970756714 +0000 UTC m=+0.134942538 container init 045fac971fdc969816bbbf321b01ec117f6e8bcfd70061e173e0379eac4c87ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 09:54:47 compute-0 podman[130633]: 2025-12-05 09:54:47.980759251 +0000 UTC m=+0.144945085 container start 045fac971fdc969816bbbf321b01ec117f6e8bcfd70061e173e0379eac4c87ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_blackwell, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:54:47 compute-0 podman[130633]: 2025-12-05 09:54:47.984432322 +0000 UTC m=+0.148618146 container attach 045fac971fdc969816bbbf321b01ec117f6e8bcfd70061e173e0379eac4c87ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_blackwell, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:54:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:48.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
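Note: the anonymous "HEAD /" requests from 192.168.122.100 and 192.168.122.102 that recur every second or two throughout this log carry no credentials, no body, and always return 200 in under a millisecond, so they look like health probes against the RGW beast frontend rather than client traffic. Such a probe can be reproduced with a few lines of Python; the host and port below are placeholders, since the frontend's listen address is not shown in these lines:

    import http.client

    # Placeholder endpoint: substitute the actual RGW frontend host and port.
    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # a healthy RGW answers 200 with an empty body
    conn.close()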
Dec 05 09:54:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095448 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]: {
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:     "1": [
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:         {
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             "devices": [
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "/dev/loop3"
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             ],
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             "lv_name": "ceph_lv0",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             "lv_size": "21470642176",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             "name": "ceph_lv0",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             "tags": {
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.cluster_name": "ceph",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.crush_device_class": "",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.encrypted": "0",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.osd_id": "1",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.type": "block",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.vdo": "0",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:                 "ceph.with_tpm": "0"
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             },
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             "type": "block",
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:             "vg_name": "ceph_vg0"
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:         }
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]:     ]
Dec 05 09:54:48 compute-0 laughing_blackwell[130673]: }
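Note: the JSON block above, printed by the short-lived laughing_blackwell helper container, is an LVM-backed OSD inventory keyed by OSD id (here osd.1 on /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3) and matches the shape of `ceph-volume lvm list --format json` output. A minimal sketch for pulling the device/OSD mapping out of such a listing, assuming it has been captured to a file whose name is made up here:

    import json

    # Hypothetical capture of the JSON inventory logged above.
    with open("ceph_volume_lvm_list.json") as fh:
        inventory = json.load(fh)

    for osd_id, lvs in inventory.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']}",
                  f"devices={','.join(lv['devices'])}",
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")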
Dec 05 09:54:48 compute-0 systemd[1]: libpod-045fac971fdc969816bbbf321b01ec117f6e8bcfd70061e173e0379eac4c87ca.scope: Deactivated successfully.
Dec 05 09:54:48 compute-0 podman[130633]: 2025-12-05 09:54:48.300126535 +0000 UTC m=+0.464312369 container died 045fac971fdc969816bbbf321b01ec117f6e8bcfd70061e173e0379eac4c87ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_blackwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:54:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-af230b4aa97831a78371c7a0f6b2d8ab2d39024bfd112004e1295229c905cce8-merged.mount: Deactivated successfully.
Dec 05 09:54:48 compute-0 podman[130633]: 2025-12-05 09:54:48.351036494 +0000 UTC m=+0.515222318 container remove 045fac971fdc969816bbbf321b01ec117f6e8bcfd70061e173e0379eac4c87ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 09:54:48 compute-0 systemd[1]: libpod-conmon-045fac971fdc969816bbbf321b01ec117f6e8bcfd70061e173e0379eac4c87ca.scope: Deactivated successfully.
Dec 05 09:54:48 compute-0 sudo[130821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndemnqdpvtokiptptocpsnyghuoiezyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928488.0610092-1007-75850071480914/AnsiballZ_stat.py'
Dec 05 09:54:48 compute-0 sudo[130821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:48 compute-0 sudo[130406]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:48 compute-0 sudo[130824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:54:48 compute-0 sudo[130824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:48 compute-0 sudo[130824]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:48 compute-0 sudo[130849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:54:48 compute-0 sudo[130849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:48 compute-0 python3.9[130823]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:54:48 compute-0 sudo[130821]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:48.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
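Note: Alertmanager fails here to deliver a dashboard notification to compute-1 and compute-2; "context deadline exceeded" means the POST to port 8443 did not complete within the webhook timeout, which usually points at an unreachable or unresponsive receiver rather than a problem on this node. A quick reachability check against the URLs taken from the error (a diagnostic sketch, not part of the deployment tooling):

    import socket

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        try:
            # Plain TCP connect to the dashboard receiver port named in the error.
            socket.create_connection((host, 8443), timeout=5).close()
            print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)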
Dec 05 09:54:48 compute-0 ceph-mon[74418]: pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 198 B/s rd, 0 op/s
Dec 05 09:54:48 compute-0 sudo[130990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miwsumaryxyqesprzoyeoaxjgmraqamd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928488.0610092-1007-75850071480914/AnsiballZ_file.py'
Dec 05 09:54:48 compute-0 sudo[130990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:48 compute-0 podman[130991]: 2025-12-05 09:54:48.954454943 +0000 UTC m=+0.046525148 container create f8a8ece20a970463c280dababde8234bba291bfd8151af93644b4a96bb0ca93b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:54:49 compute-0 systemd[1]: Started libpod-conmon-f8a8ece20a970463c280dababde8234bba291bfd8151af93644b4a96bb0ca93b.scope.
Dec 05 09:54:49 compute-0 podman[130991]: 2025-12-05 09:54:48.935000575 +0000 UTC m=+0.027070800 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:54:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:54:49 compute-0 podman[130991]: 2025-12-05 09:54:49.060150411 +0000 UTC m=+0.152220636 container init f8a8ece20a970463c280dababde8234bba291bfd8151af93644b4a96bb0ca93b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 09:54:49 compute-0 podman[130991]: 2025-12-05 09:54:49.071723841 +0000 UTC m=+0.163794046 container start f8a8ece20a970463c280dababde8234bba291bfd8151af93644b4a96bb0ca93b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 09:54:49 compute-0 podman[130991]: 2025-12-05 09:54:49.075921977 +0000 UTC m=+0.167992202 container attach f8a8ece20a970463c280dababde8234bba291bfd8151af93644b4a96bb0ca93b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_germain, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:54:49 compute-0 xenodochial_germain[131009]: 167 167
Dec 05 09:54:49 compute-0 systemd[1]: libpod-f8a8ece20a970463c280dababde8234bba291bfd8151af93644b4a96bb0ca93b.scope: Deactivated successfully.
Dec 05 09:54:49 compute-0 conmon[131009]: conmon f8a8ece20a970463c280 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8a8ece20a970463c280dababde8234bba291bfd8151af93644b4a96bb0ca93b.scope/container/memory.events
Dec 05 09:54:49 compute-0 podman[130991]: 2025-12-05 09:54:49.082791687 +0000 UTC m=+0.174861892 container died f8a8ece20a970463c280dababde8234bba291bfd8151af93644b4a96bb0ca93b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:54:49 compute-0 python3.9[130994]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-04340093e250e3d99b2f60d411fe4673f13b858a2663eaffd5e4f7102cbae347-merged.mount: Deactivated successfully.
Dec 05 09:54:49 compute-0 sudo[130990]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:49 compute-0 podman[130991]: 2025-12-05 09:54:49.14136923 +0000 UTC m=+0.233439445 container remove f8a8ece20a970463c280dababde8234bba291bfd8151af93644b4a96bb0ca93b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_germain, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:54:49 compute-0 systemd[1]: libpod-conmon-f8a8ece20a970463c280dababde8234bba291bfd8151af93644b4a96bb0ca93b.scope: Deactivated successfully.
Dec 05 09:54:49 compute-0 podman[131059]: 2025-12-05 09:54:49.355763316 +0000 UTC m=+0.071872551 container create 1c280b002eb0072c796e8995869e282fe3a4bc8bfbea3a47dd11f790a54194c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 05 09:54:49 compute-0 systemd[1]: Started libpod-conmon-1c280b002eb0072c796e8995869e282fe3a4bc8bfbea3a47dd11f790a54194c2.scope.
Dec 05 09:54:49 compute-0 podman[131059]: 2025-12-05 09:54:49.336871784 +0000 UTC m=+0.052981049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:54:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:54:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a4010ae7cd01417d3f7596ea1806345dd0d8f56091bbcd3e149d5fd67b7d48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a4010ae7cd01417d3f7596ea1806345dd0d8f56091bbcd3e149d5fd67b7d48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a4010ae7cd01417d3f7596ea1806345dd0d8f56091bbcd3e149d5fd67b7d48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a4010ae7cd01417d3f7596ea1806345dd0d8f56091bbcd3e149d5fd67b7d48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:49 compute-0 podman[131059]: 2025-12-05 09:54:49.45376878 +0000 UTC m=+0.169878045 container init 1c280b002eb0072c796e8995869e282fe3a4bc8bfbea3a47dd11f790a54194c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hoover, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:54:49 compute-0 podman[131059]: 2025-12-05 09:54:49.463718466 +0000 UTC m=+0.179827721 container start 1c280b002eb0072c796e8995869e282fe3a4bc8bfbea3a47dd11f790a54194c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 09:54:49 compute-0 podman[131059]: 2025-12-05 09:54:49.470684188 +0000 UTC m=+0.186793453 container attach 1c280b002eb0072c796e8995869e282fe3a4bc8bfbea3a47dd11f790a54194c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 09:54:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:54:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 496 B/s rd, 99 B/s wr, 0 op/s
Dec 05 09:54:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:49 compute-0 sudo[131216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjincscwafvexdbmoqgbnfcxstjmnrie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928489.468922-1046-209023730842104/AnsiballZ_command.py'
Dec 05 09:54:49 compute-0 sudo[131216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:49 compute-0 python3.9[131220]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:54:49 compute-0 sudo[131216]: pam_unix(sudo:session): session closed for user root
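Note: the command invoked above concatenates the EDPM nftables fragments and feeds them to `nft -c -f -`, i.e. a parse-and-validate pass that installs nothing. A rough Python equivalent of that dry run, assuming the same fragment paths under /etc/nftables:

    import subprocess
    from pathlib import Path

    fragments = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    # Concatenate the fragments and let nft check them without applying (-c).
    ruleset = "".join(Path(p).read_text() for p in fragments)
    result = subprocess.run(["nft", "-c", "-f", "-"],
                            input=ruleset, text=True, capture_output=True)
    if result.returncode != 0:
        raise SystemExit("ruleset rejected:\n" + result.stderr)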
Dec 05 09:54:50 compute-0 lvm[131305]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:54:50 compute-0 lvm[131305]: VG ceph_vg0 finished
Dec 05 09:54:50 compute-0 amazing_hoover[131075]: {}
Dec 05 09:54:50 compute-0 systemd[1]: libpod-1c280b002eb0072c796e8995869e282fe3a4bc8bfbea3a47dd11f790a54194c2.scope: Deactivated successfully.
Dec 05 09:54:50 compute-0 podman[131059]: 2025-12-05 09:54:50.222760535 +0000 UTC m=+0.938869780 container died 1c280b002eb0072c796e8995869e282fe3a4bc8bfbea3a47dd11f790a54194c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hoover, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:54:50 compute-0 systemd[1]: libpod-1c280b002eb0072c796e8995869e282fe3a4bc8bfbea3a47dd11f790a54194c2.scope: Consumed 1.194s CPU time.
Dec 05 09:54:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:50.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-83a4010ae7cd01417d3f7596ea1806345dd0d8f56091bbcd3e149d5fd67b7d48-merged.mount: Deactivated successfully.
Dec 05 09:54:50 compute-0 podman[131059]: 2025-12-05 09:54:50.302721508 +0000 UTC m=+1.018830763 container remove 1c280b002eb0072c796e8995869e282fe3a4bc8bfbea3a47dd11f790a54194c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 09:54:50 compute-0 systemd[1]: libpod-conmon-1c280b002eb0072c796e8995869e282fe3a4bc8bfbea3a47dd11f790a54194c2.scope: Deactivated successfully.
Dec 05 09:54:50 compute-0 sudo[130849]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:54:50 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:54:50 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:50 compute-0 sudo[131363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:54:50 compute-0 sudo[131363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:54:50 compute-0 sudo[131363]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:50 compute-0 sudo[131469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsplhidqqmwmqsuistajkphzcejdpilc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928490.3630457-1070-121800056204972/AnsiballZ_blockinfile.py'
Dec 05 09:54:50 compute-0 sudo[131469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:51 compute-0 python3.9[131471]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:51 compute-0 sudo[131469]: pam_unix(sudo:session): session closed for user root
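Note: the blockinfile task above keeps a marker-delimited block in /etc/sysconfig/nftables.conf and validates the result with `nft -c -f %s` before writing it. From the parameters logged, the managed block it maintains would read roughly as follows (an illustration, reconstructed from the logged block and marker settings):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK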
Dec 05 09:54:51 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:54:51 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:54:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 496 B/s rd, 99 B/s wr, 0 op/s
Dec 05 09:54:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 09:54:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:51.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 09:54:51 compute-0 sudo[131622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrlydhoxekzbacjsvggwsugaxynfjxpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928491.3508475-1097-32392534660183/AnsiballZ_file.py'
Dec 05 09:54:51 compute-0 sudo[131622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:51 compute-0 ceph-mon[74418]: pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 496 B/s rd, 99 B/s wr, 0 op/s
Dec 05 09:54:51 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:51 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:54:51 compute-0 python3.9[131624]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:51 compute-0 sudo[131622]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:52 compute-0 sudo[131776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbhvsnexsqhvfagvxmupsspmoitxdeaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928491.9817743-1097-109904300016814/AnsiballZ_file.py'
Dec 05 09:54:52 compute-0 sudo[131776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:52.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:52 compute-0 python3.9[131778]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:54:52 compute-0 sudo[131776]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:52 compute-0 ceph-mon[74418]: pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 496 B/s rd, 99 B/s wr, 0 op/s
Dec 05 09:54:53 compute-0 sudo[131928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hicdheozyexpgiazzftzsccieamketpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928492.853435-1142-191757887065337/AnsiballZ_mount.py'
Dec 05 09:54:53 compute-0 sudo[131928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 496 B/s wr, 2 op/s
Dec 05 09:54:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:53.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:53 compute-0 python3.9[131930]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 05 09:54:53 compute-0 sudo[131928]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:54 compute-0 sudo[132080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbozkbhgyqbtqwbevmlcfooazdaxhuno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928493.732616-1142-16529214954675/AnsiballZ_mount.py'
Dec 05 09:54:54 compute-0 sudo[132080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:54:54 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Scheduled restart job, restart counter is at 3.
Dec 05 09:54:54 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:54:54 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 1.831s CPU time.
Dec 05 09:54:54 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:54:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:54.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:54 compute-0 python3.9[132082]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 05 09:54:54 compute-0 sudo[132080]: pam_unix(sudo:session): session closed for user root
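Note: together with the /dev/hugepages1G task at 09:54:53, the ansible.posix.mount call above mounts dedicated hugetlbfs instances (1 GiB and 2 MiB page sizes) on the directories created a moment earlier and, with state=mounted, also persists them. The net effect corresponds roughly to these fstab entries, shown only as an illustration of the logged parameters:

    none  /dev/hugepages1G  hugetlbfs  pagesize=1G  0 0
    none  /dev/hugepages2M  hugetlbfs  pagesize=2M  0 0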
Dec 05 09:54:54 compute-0 podman[132133]: 2025-12-05 09:54:54.341646712 +0000 UTC m=+0.046844188 container create 3fcd774447c8fdb0b4cc5052b6e5ad4014232ea2916b20ac49bdfb3817240861 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 09:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815b348392f5bc9832b56e1eaab30ac28a2a7dbf1a7e06512b0553dbfdc38db1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815b348392f5bc9832b56e1eaab30ac28a2a7dbf1a7e06512b0553dbfdc38db1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815b348392f5bc9832b56e1eaab30ac28a2a7dbf1a7e06512b0553dbfdc38db1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815b348392f5bc9832b56e1eaab30ac28a2a7dbf1a7e06512b0553dbfdc38db1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hocvro-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:54 compute-0 podman[132133]: 2025-12-05 09:54:54.39935298 +0000 UTC m=+0.104550496 container init 3fcd774447c8fdb0b4cc5052b6e5ad4014232ea2916b20ac49bdfb3817240861 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 09:54:54 compute-0 podman[132133]: 2025-12-05 09:54:54.404813612 +0000 UTC m=+0.110011088 container start 3fcd774447c8fdb0b4cc5052b6e5ad4014232ea2916b20ac49bdfb3817240861 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:54:54 compute-0 bash[132133]: 3fcd774447c8fdb0b4cc5052b6e5ad4014232ea2916b20ac49bdfb3817240861
Dec 05 09:54:54 compute-0 podman[132133]: 2025-12-05 09:54:54.321584747 +0000 UTC m=+0.026782223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:54:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:54:54 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 05 09:54:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:54:54 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 05 09:54:54 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:54:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:54:54 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 05 09:54:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:54:54 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 05 09:54:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:54:54 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 05 09:54:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:54:54 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 05 09:54:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:54:54 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 05 09:54:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:54:54 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:54:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:54:54 compute-0 ceph-mon[74418]: pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 496 B/s wr, 2 op/s
Dec 05 09:54:54 compute-0 sshd-session[124071]: Connection closed by 192.168.122.30 port 50016
Dec 05 09:54:54 compute-0 sshd-session[124068]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:54:54 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Dec 05 09:54:54 compute-0 systemd[1]: session-44.scope: Consumed 29.936s CPU time.
Dec 05 09:54:54 compute-0 systemd-logind[789]: Session 44 logged out. Waiting for processes to exit.
Dec 05 09:54:54 compute-0 systemd-logind[789]: Removed session 44.
Dec 05 09:54:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 496 B/s wr, 2 op/s
Dec 05 09:54:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:55.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:54:55] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:54:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:54:55] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:54:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:56.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:54:56 compute-0 ceph-mon[74418]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 496 B/s wr, 2 op/s
Dec 05 09:54:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:56.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:54:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:54:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:54:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 09:54:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:54:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:57.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:54:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:54:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:54:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:54:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:54:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:54:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:54:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:54:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 09:54:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:54:58.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 09:54:58 compute-0 ceph-mon[74418]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 09:54:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:54:58.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:54:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:54:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec 05 09:54:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:54:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:54:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:54:59.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:00 compute-0 sshd-session[132219]: Accepted publickey for zuul from 192.168.122.30 port 52962 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:55:00 compute-0 systemd-logind[789]: New session 45 of user zuul.
Dec 05 09:55:00 compute-0 systemd[1]: Started Session 45 of User zuul.
Dec 05 09:55:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:00.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:00 compute-0 sshd-session[132219]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:55:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:00 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 05 09:55:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:00 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 05 09:55:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:00 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:55:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:00 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:55:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:00 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 09:55:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:00 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:55:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:00 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:55:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:00 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
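Note: the negative values in the Ganesha messages above appear to be errno codes; ret=-2 is ENOENT, meaning the RADOS recovery (client-ID) objects for this instance do not exist yet, so there is nothing to reclaim and the server simply sits out its 90-second grace period with a client count of 0. Errno values in such logs can be decoded directly, e.g.:

    import errno, os

    rc = -2                      # value taken from the Ganesha log line above
    print(errno.errorcode[-rc])  # ENOENT
    print(os.strerror(-rc))      # No such file or directory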
Dec 05 09:55:00 compute-0 sudo[132373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfynjcvofoprelnyflkoyfqtrbskaxhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928500.3711562-18-278388847923910/AnsiballZ_tempfile.py'
Dec 05 09:55:00 compute-0 sudo[132373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:01 compute-0 python3.9[132375]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 05 09:55:01 compute-0 sudo[132373]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:01 compute-0 ceph-mon[74418]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec 05 09:55:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 09:55:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:01.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:01 compute-0 sudo[132525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bopouhtccqfmrecnjjpbigvbsmamewzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928501.3450267-54-69063824137676/AnsiballZ_stat.py'
Dec 05 09:55:01 compute-0 sudo[132525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:01 compute-0 python3.9[132527]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:55:02 compute-0 sudo[132525]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:02.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:02 compute-0 sudo[132681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaadyudrdidjcqogdipisxrcetbsgtmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928502.242481-78-200314196202625/AnsiballZ_slurp.py'
Dec 05 09:55:02 compute-0 sudo[132681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:02 compute-0 python3.9[132683]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 05 09:55:02 compute-0 sudo[132681]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:03 compute-0 ceph-mon[74418]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 09:55:03 compute-0 sudo[132833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trhcrfjiwproykaugyicyfpdqxiztcbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928503.0797997-102-238117275730517/AnsiballZ_stat.py'
Dec 05 09:55:03 compute-0 sudo[132833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095503 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 09:55:03 compute-0 python3.9[132835]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.9tq3p7i0 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:55:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Dec 05 09:55:03 compute-0 sudo[132833]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:03.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:04 compute-0 sudo[132958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exohjymjcqswowetvmhiskuczsazgbse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928503.0797997-102-238117275730517/AnsiballZ_copy.py'
Dec 05 09:55:04 compute-0 sudo[132958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:04 compute-0 python3.9[132960]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.9tq3p7i0 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764928503.0797997-102-238117275730517/.source.9tq3p7i0 _original_basename=.uepz16j_ follow=False checksum=0a5a98a0591b3fd1a6c822f29f896de305adceb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:55:04 compute-0 sudo[132958]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:04.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:55:05 compute-0 sudo[133112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndrabjsjamgmbwrzkoysyrgectpivuea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928504.4993691-147-106542767221962/AnsiballZ_setup.py'
Dec 05 09:55:05 compute-0 sudo[133112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:05 compute-0 ceph-mon[74418]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Dec 05 09:55:05 compute-0 python3.9[133114]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:55:05 compute-0 sudo[133112]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec 05 09:55:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:05.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:55:05] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 09:55:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:55:05] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 09:55:06 compute-0 sudo[133265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-javewppphykjntbuaslhygpbkjxijhat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928505.7262895-172-229663119002589/AnsiballZ_blockinfile.py'
Dec 05 09:55:06 compute-0 sudo[133265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:06 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 05 09:55:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:06.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:06 compute-0 python3.9[133267]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxQNlo8LJIA8xfbJGKSdMV98UOyu9sX4A5uTtBflRAOkH2wXRmdsECPkzI5G44w402q0xg3frbgD+BCh0dOaEjB53lSL9fiuoFoP2UMDOiBdr13eOasoBklzMszBfqWrVOks662bXDBzMQ61eXcXHiU5QWmKCS1HrupYfTHcabdj2EL/qsRRwL8Auc8eBHxl3VUFxB05r2Uu4Ls3Rt42dXItXqSr9ALeWVbYPQRh5O0Q8GItA45C+msxeJMBFgE8UcN3mm5qgcAxLZEViqYfKUEoXhxs57riJWdfojrm8a0UCNV9uLTW37s06Hg5QXXpwRm8AQqH4kXiSb/I+Dx8y9V568G3r2UAIy/DXBDgpu0+eVaNleKpcClTi/gUXjVedABom8PDw4ot8kdwBujvaB5J7Fmf9yi3XbdjQlMU0F+v8TTLmhUTMZbcSdlvH6ZEdJUp+cs6h/dep1Ia2NdljpuBse8DVa9vLu/amki3Qb08HvTtMHJVqHtKzSn+saAA8=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYyvL99EFBHDm2asxUS8r44IHbcLB7lwrOEDjFJjq8+
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCHaOVGjK9mhAo2eO+zVKbUHICCg0NK+AxIuZHw1DeeR3t1zLuA1LozuMzNRiZbW6GoVgw9PyUclcy8Qm1CEzNw=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqOVNDEMJj7JrxcivnSQxHLR3UNAyfEW275XwwR2jqDQwc5jHlKQN4yofccTvovJGu6nT/aKnvQp7UAX8rx+S+QqDLDzdFmyzd+Fu2yNA2xzyOG24YuBHbDF475BhB48C+7qV2QB5mwggpvanoqX1kcVbWglrlLBsuq6qRxpzz2gWprt9FGznLxXJI3JngVDt0Bbnug36yxZrL5c4l6eXNa2wuWzOG1uUcY73v39V1eLZrEfwrKGrHZuACNEl5UV8i4XepA5a/s8VDbe4o3fbA8ntB6z/oDq73X7wYyRME4HKlPcXoY57jsbPsYeg+Z3uYZ+8wCFJUwhfZjQFuUu6UEqkjMsl/DLQzI4OjWenkpnWdSxGIqYsvn0tlggV6eclljuuW2JyQde+uS8l0XAPU+6aT1VDWnUSr0w/mXWcVqjimgDfkY5UZk8z9YJTydk11MvySj4gz8WqEuTeUH4dGmJlis79zWcTDnldC0pDCsj9CZNBrcxuoePr7pKMh9MM=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA1JP08gBa69YpXGOxf1p0VMkT6sTUHT+UQjh3TGmf48
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOT71mYgQwbQpmRBwr+IUF/Vj3hJuhHTHm0L+1h7O5z+V6giaTp0V2h33GCQ7WbEntvKd2CSppF3vBCKuE1b+hI=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCCGYaoVdx41VJW+np05HpScB1pj66kWjXXdk8rueVCQEL0TF1cfRmP7kNWhq+oLpDC2qeyocH2is9CJbCeF3nMnz3pcXgfNJPsl77PpHuYx9+dJ/l7aiEp2MNYzamDc3S92PWUyMWNOlCRfuBrHSZBAvnB0xD3R99yvMHKc67cXzjVV4nSkUQqBv0MrH9HgFmhfG3gVFbdDXCRgFIG2h+ZF1DPnbsrsNdFK2ALSArtL+sfxDi+msM9PxuVl/C9PiKNRcHMUcrE3V3DjbRVO3nzVs9HZ1bJMyZodXLzB1JDhL1653n8Cud1gpE0PC7bhd3UIlCeSOpZAc0+Dn8vSvN+RHUmd7gXWo5cSXROdbzLhtT83Tzh/tl0dfNd6I7+//D75TB6vKSMnF921Gt1OkB29orcpfiGcS0ibDi8By5Xy1IEq/3DLbUNKAJ38yvdagfMHVoFlITKztKyx00vtL3Vhq6d/+p7XPkb1pJA2EvTvWJI8J5fq5UyFJ6V/gxgqk=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM8586CWFNOaDluakc5a5Mj5ccpeoURPnbi800rdSC11
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB3iz1XalQfQsvPdIq2L/sK4J7E2PFRYviaI0Y8WL6ihsqpbSqR/q+QE3EwzZARmbL5is6sKoBExWB+qAZZw/mw=
                                              create=True mode=0644 path=/tmp/ansible.9tq3p7i0 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:55:06 compute-0 sudo[133265]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 compute-0 sudo[133295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:55:06 compute-0 sudo[133295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:55:06 compute-0 sudo[133295]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 compute-0 ceph-mon[74418]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000000c:nfs.cephfs.2: -2
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 09:55:06 compute-0 sudo[133457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irtcbqtzvtwgwqewsyunznyeurkqfthh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928506.579469-196-84431353849431/AnsiballZ_command.py'
Dec 05 09:55:06 compute-0 sudo[133457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:06.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:55:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:07 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1714000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:07 compute-0 python3.9[133459]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.9tq3p7i0' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:55:07 compute-0 sudo[133457]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:07 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec 05 09:55:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:07.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:07 compute-0 sudo[133615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrkdzmndcgeufmbshckwfrnnyzyzqdqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928507.4539466-220-162207713164866/AnsiballZ_file.py'
Dec 05 09:55:07 compute-0 sudo[133615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:08 compute-0 python3.9[133617]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.9tq3p7i0 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:55:08 compute-0 sudo[133615]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:08.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:08 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:08 compute-0 sshd-session[132223]: Connection closed by 192.168.122.30 port 52962
Dec 05 09:55:08 compute-0 sshd-session[132219]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:55:08 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Dec 05 09:55:08 compute-0 systemd[1]: session-45.scope: Consumed 4.940s CPU time.
Dec 05 09:55:08 compute-0 systemd-logind[789]: Session 45 logged out. Waiting for processes to exit.
Dec 05 09:55:08 compute-0 systemd-logind[789]: Removed session 45.
Dec 05 09:55:08 compute-0 ceph-mon[74418]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec 05 09:55:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:08.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:55:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:08.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:55:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:09 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:09 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:55:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 05 09:55:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:09.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:10.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095510 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 09:55:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:10 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:10 compute-0 ceph-mon[74418]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 05 09:55:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:11 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:11 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Dec 05 09:55:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:11.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:12.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:12 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:55:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:55:12 compute-0 ceph-mon[74418]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Dec 05 09:55:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:55:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:13 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:13 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Dec 05 09:55:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:13.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:14.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:14 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:55:14 compute-0 ceph-mon[74418]: pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Dec 05 09:55:14 compute-0 sshd-session[133650]: Accepted publickey for zuul from 192.168.122.30 port 44028 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:55:14 compute-0 systemd-logind[789]: New session 46 of user zuul.
Dec 05 09:55:14 compute-0 systemd[1]: Started Session 46 of User zuul.
Dec 05 09:55:14 compute-0 sshd-session[133650]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:55:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:15 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:15 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 05 09:55:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:15.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:55:15] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:55:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:55:15] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:55:15 compute-0 python3.9[133803]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:55:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:16.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:16 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:16 compute-0 ceph-mon[74418]: pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 05 09:55:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:16.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:55:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:17 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:17 compute-0 sudo[133959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyrzzcbvxcilulxtwaeyzqaebmzailme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928516.4604318-56-259710916246185/AnsiballZ_systemd.py'
Dec 05 09:55:17 compute-0 sudo[133959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:17 compute-0 python3.9[133961]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 05 09:55:17 compute-0 sudo[133959]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:17 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 05 09:55:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:55:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:17.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:55:18 compute-0 sudo[134113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwtkiwskxujnsxwxxioqftaxlylbjcpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928517.7682965-80-204336137430070/AnsiballZ_systemd.py'
Dec 05 09:55:18 compute-0 sudo[134113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:18 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:55:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:18.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:55:18 compute-0 python3.9[134115]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:55:18 compute-0 sudo[134113]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:18.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:55:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:18.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:55:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:19 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:19 compute-0 sudo[134268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shfsflgfqbwqxehecjsfqzzwtfjvtexr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928518.7664423-107-243521344536577/AnsiballZ_command.py'
Dec 05 09:55:19 compute-0 sudo[134268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:19 compute-0 python3.9[134270]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:55:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:19 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:55:19 compute-0 sudo[134268]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
Dec 05 09:55:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:55:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:19.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:55:19 compute-0 ceph-mon[74418]: pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 05 09:55:20 compute-0 sudo[134422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwmcshkhorrqhzgzydtnrnvnpktvjjdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928519.7609503-131-25359517967504/AnsiballZ_stat.py'
Dec 05 09:55:20 compute-0 sudo[134422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:20 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:20.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:20 compute-0 python3.9[134424]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:55:20 compute-0 sudo[134422]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:21 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:21 compute-0 ceph-mon[74418]: pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
Dec 05 09:55:21 compute-0 sudo[134575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdqurejivnooifskodkwabugitxmoxgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928520.714285-158-230060290108215/AnsiballZ_file.py'
Dec 05 09:55:21 compute-0 sudo[134575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:21 compute-0 python3.9[134577]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:55:21 compute-0 sudo[134575]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:21 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:21.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:21 compute-0 sshd-session[133653]: Connection closed by 192.168.122.30 port 44028
Dec 05 09:55:21 compute-0 sshd-session[133650]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:55:21 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Dec 05 09:55:21 compute-0 systemd[1]: session-46.scope: Consumed 3.906s CPU time.
Dec 05 09:55:21 compute-0 systemd-logind[789]: Session 46 logged out. Waiting for processes to exit.
Dec 05 09:55:21 compute-0 systemd-logind[789]: Removed session 46.
Dec 05 09:55:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:22 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:55:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:22.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:55:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:23 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:23 compute-0 ceph-mon[74418]: pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:23 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:23.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:24 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:24.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:55:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:25 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:25 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:25.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:55:25] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:55:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:55:25] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:55:25 compute-0 ceph-mon[74418]: pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:26 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:26.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:26 compute-0 sudo[134608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:55:26 compute-0 sudo[134608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:55:26 compute-0 sudo[134608]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:26.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:55:27 compute-0 ceph-mon[74418]: pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:55:27
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'backups', '.mgr', 'vms', '.nfs', 'cephfs.cephfs.meta']
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 09:55:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:55:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 09:55:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:27.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:55:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:55:27 compute-0 sshd-session[134633]: Accepted publickey for zuul from 192.168.122.30 port 44114 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:55:27 compute-0 systemd-logind[789]: New session 47 of user zuul.
Dec 05 09:55:27 compute-0 systemd[1]: Started Session 47 of User zuul.
Dec 05 09:55:27 compute-0 sshd-session[134633]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:55:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:55:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:28 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:28.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:28 compute-0 python3.9[134788]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:55:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:28.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:55:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:29 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:29 compute-0 ceph-mon[74418]: pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:29 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:55:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:55:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:29.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:29 compute-0 sudo[134942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibplyyrpwgesyrnevumuskkxbethheto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928529.49596-62-95537887722519/AnsiballZ_setup.py'
Dec 05 09:55:29 compute-0 sudo[134942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:30 compute-0 python3.9[134944]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:55:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:30 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:55:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:30.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:55:30 compute-0 sudo[134942]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:30 compute-0 sudo[135028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpedsrtdwewmfdazbfqscacpgyaresvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928529.49596-62-95537887722519/AnsiballZ_dnf.py'
Dec 05 09:55:30 compute-0 sudo[135028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:30 compute-0 python3.9[135030]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 09:55:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:31 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:31 compute-0 ceph-mon[74418]: pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:55:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:31 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:31.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:32 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:32.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:32 compute-0 sudo[135028]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:33 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:33 compute-0 ceph-mon[74418]: pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:33 compute-0 python3.9[135183]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:55:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:33 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:33.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:34 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:34.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:55:34 compute-0 python3.9[135336]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 09:55:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:35 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:35 compute-0 ceph-mon[74418]: pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:35 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:35.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:55:35] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:55:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:55:35] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:55:35 compute-0 python3.9[135486]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:55:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:36 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:55:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:36.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:55:36 compute-0 python3.9[135637]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:55:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:36.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:55:37 compute-0 sshd-session[134636]: Connection closed by 192.168.122.30 port 44114
Dec 05 09:55:37 compute-0 sshd-session[134633]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:55:37 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Dec 05 09:55:37 compute-0 systemd[1]: session-47.scope: Consumed 6.089s CPU time.
Dec 05 09:55:37 compute-0 systemd-logind[789]: Session 47 logged out. Waiting for processes to exit.
Dec 05 09:55:37 compute-0 systemd-logind[789]: Removed session 47.
Dec 05 09:55:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:37 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:37 compute-0 ceph-mon[74418]: pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:55:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Cumulative writes: 8370 writes, 33K keys, 8370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 8370 writes, 1809 syncs, 4.63 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8370 writes, 33K keys, 8370 commit groups, 1.0 writes per commit group, ingest: 20.82 MB, 0.03 MB/s
                                           Interval WAL: 8370 writes, 1809 syncs, 4.63 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.8 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 05 09:55:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:37 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:37.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095537 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:55:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:38 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:38.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:38.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:55:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:39 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:39 compute-0 ceph-mon[74418]: pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:55:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:55:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:39 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 09:55:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:39.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:40 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:40.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:40 compute-0 ceph-mon[74418]: pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 09:55:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:41 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:41 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c0013a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:55:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:41.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:42 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:55:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:42.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:55:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:55:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:55:42 compute-0 sshd-session[135672]: Accepted publickey for zuul from 192.168.122.30 port 60858 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:55:42 compute-0 systemd-logind[789]: New session 48 of user zuul.
Dec 05 09:55:42 compute-0 systemd[1]: Started Session 48 of User zuul.
Dec 05 09:55:42 compute-0 sshd-session[135672]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:55:42 compute-0 ceph-mon[74418]: pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:55:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:55:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:43 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:43 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:55:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:43.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:43 compute-0 python3.9[135825]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:55:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:44 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:44.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:55:44 compute-0 ceph-mon[74418]: pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:55:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:45 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:45 compute-0 sudo[135981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcyjmtanozejgkamxdrflyeoxjpvvcqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928544.8409672-109-204239003811738/AnsiballZ_file.py'
Dec 05 09:55:45 compute-0 sudo[135981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:45 compute-0 python3.9[135983]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:55:45 compute-0 sudo[135981]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:45 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:55:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:45.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:55:45] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Dec 05 09:55:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:55:45] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Dec 05 09:55:45 compute-0 sudo[136133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjqmbhyzrkbmmxeqithledegesaeccfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928545.6520119-109-28254387481411/AnsiballZ_file.py'
Dec 05 09:55:45 compute-0 sudo[136133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:46 compute-0 python3.9[136135]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:55:46 compute-0 sudo[136133]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:46 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:46.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:46 compute-0 sudo[136237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:55:46 compute-0 sudo[136237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:55:46 compute-0 sudo[136237]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:46 compute-0 sudo[136312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oajcygvatsaxmwobpgabszgcxaheltbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928546.3523145-153-137287706805656/AnsiballZ_stat.py'
Dec 05 09:55:46 compute-0 sudo[136312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:46 compute-0 python3.9[136314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:55:46 compute-0 sudo[136312]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:46.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:55:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:47 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:47 compute-0 sudo[136435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhbhugqeokbzrlvzgtxlqeqjwhqdogwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928546.3523145-153-137287706805656/AnsiballZ_copy.py'
Dec 05 09:55:47 compute-0 sudo[136435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:47 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:55:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:47.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:47 compute-0 python3.9[136437]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928546.3523145-153-137287706805656/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8b341908e770d164a71d7c59234cb6d092599912 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:55:47 compute-0 sudo[136435]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:47 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:55:47 compute-0 ceph-mon[74418]: pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:55:48 compute-0 sudo[136588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpzslnatxfoaambqrzcdlzjjointewdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928547.8606977-153-156852895482662/AnsiballZ_stat.py'
Dec 05 09:55:48 compute-0 sudo[136588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:48 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:48 compute-0 python3.9[136590]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:55:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:48.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:48 compute-0 sudo[136588]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:48 compute-0 sudo[136712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prxoyadttmocrvxqylysxosifngpjoje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928547.8606977-153-156852895482662/AnsiballZ_copy.py'
Dec 05 09:55:48 compute-0 sudo[136712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:48.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:55:48 compute-0 python3.9[136714]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928547.8606977-153-156852895482662/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9267c9ccba711e730e76a6ac36838f75140dd71c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:55:48 compute-0 ceph-mon[74418]: pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:55:48 compute-0 sudo[136712]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:49 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:49 compute-0 sudo[136864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdaomaqawilgchplmzxkmmoparblruqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928549.0875945-153-199555188584033/AnsiballZ_stat.py'
Dec 05 09:55:49 compute-0 sudo[136864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:55:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:49 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 09:55:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:49.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:49 compute-0 python3.9[136866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:55:49 compute-0 sudo[136864]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:49 compute-0 sudo[136987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oafcmqstbxibjsnsjwpkjxfalaihztgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928549.0875945-153-199555188584033/AnsiballZ_copy.py'
Dec 05 09:55:49 compute-0 sudo[136987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:50 compute-0 python3.9[136989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928549.0875945-153-199555188584033/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3c923225688a4032ffb8cd0d6fb10dc7273f6927 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:55:50 compute-0 sudo[136987]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:50 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:55:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:50.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:55:50 compute-0 sudo[137141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqpsaezwveifmmgnisfuznwhxzehmclp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928550.3782964-269-280716028784072/AnsiballZ_file.py'
Dec 05 09:55:50 compute-0 sudo[137141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:50 compute-0 sudo[137144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:50 compute-0 sudo[137144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:55:50 compute-0 sudo[137144]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:50 compute-0 sudo[137169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:55:50 compute-0 sudo[137169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:55:50 compute-0 python3.9[137143]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:55:50 compute-0 sudo[137141]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:50 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:55:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:50 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:55:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:51 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:51 compute-0 ceph-mon[74418]: pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 09:55:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 09:55:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:55:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 09:55:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:55:51 compute-0 sudo[137360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvuarfwoojgrudqcxperpohygbvpegnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928551.0408149-269-122638145997971/AnsiballZ_file.py'
Dec 05 09:55:51 compute-0 sudo[137360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:51 compute-0 sudo[137169]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:51 compute-0 python3.9[137365]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:55:51 compute-0 sudo[137360]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:51 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 09:55:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:51.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:55:51 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:55:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:55:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:55:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 689 B/s wr, 2 op/s
Dec 05 09:55:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:55:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:55:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:55:52 compute-0 sudo[137524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueujrigkvrlzpjrvssgnviymvgsutxkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928551.7258847-312-267278724213157/AnsiballZ_stat.py'
Dec 05 09:55:52 compute-0 sudo[137524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:55:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:55:52 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:55:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:55:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:55:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:55:52 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:55:52 compute-0 sudo[137528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:52 compute-0 sudo[137528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:55:52 compute-0 sudo[137528]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:52 compute-0 sudo[137553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:55:52 compute-0 sudo[137553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:55:52 compute-0 python3.9[137526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:55:52 compute-0 sudo[137524]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:55:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:55:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:55:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:55:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:55:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:55:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:55:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:55:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.283440) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928552283594, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1180, "num_deletes": 251, "total_data_size": 2232363, "memory_usage": 2260576, "flush_reason": "Manual Compaction"}
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928552302767, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 2173218, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12145, "largest_seqno": 13324, "table_properties": {"data_size": 2167531, "index_size": 3078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11798, "raw_average_key_size": 19, "raw_value_size": 2156171, "raw_average_value_size": 3552, "num_data_blocks": 133, "num_entries": 607, "num_filter_entries": 607, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928444, "oldest_key_time": 1764928444, "file_creation_time": 1764928552, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 19363 microseconds, and 6905 cpu microseconds.
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.302817) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 2173218 bytes OK
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.302844) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.304762) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.304775) EVENT_LOG_v1 {"time_micros": 1764928552304772, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.304791) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 09:55:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:52 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2227092, prev total WAL file size 2227092, number of live WAL files 2.
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.305702) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(2122KB)], [29(12MB)]
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928552305877, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15781300, "oldest_snapshot_seqno": -1}
Dec 05 09:55:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:52.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4345 keys, 13469048 bytes, temperature: kUnknown
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928552532579, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 13469048, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13436943, "index_size": 20155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 111374, "raw_average_key_size": 25, "raw_value_size": 13354411, "raw_average_value_size": 3073, "num_data_blocks": 850, "num_entries": 4345, "num_filter_entries": 4345, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764928552, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.533051) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13469048 bytes
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.538829) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 69.6 rd, 59.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 13.0 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(13.5) write-amplify(6.2) OK, records in: 4865, records dropped: 520 output_compression: NoCompression
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.538884) EVENT_LOG_v1 {"time_micros": 1764928552538866, "job": 12, "event": "compaction_finished", "compaction_time_micros": 226788, "compaction_time_cpu_micros": 71519, "output_level": 6, "num_output_files": 1, "total_output_size": 13469048, "num_input_records": 4865, "num_output_records": 4345, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928552539479, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928552542909, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.305546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.542987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.542994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.542995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.542997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:55:52 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-09:55:52.542999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:55:52 compute-0 podman[137706]: 2025-12-05 09:55:52.570504713 +0000 UTC m=+0.055046269 container create c4b50a8ba43cf2193eaec26322c2f6bcba348abd906dd7f5b8fa5a2b997b57ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 05 09:55:52 compute-0 sudo[137752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdqpvrszcwrhcfjysicqmjczysnufvhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928551.7258847-312-267278724213157/AnsiballZ_copy.py'
Dec 05 09:55:52 compute-0 sudo[137752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:52 compute-0 systemd[1]: Started libpod-conmon-c4b50a8ba43cf2193eaec26322c2f6bcba348abd906dd7f5b8fa5a2b997b57ac.scope.
Dec 05 09:55:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:55:52 compute-0 podman[137706]: 2025-12-05 09:55:52.549443913 +0000 UTC m=+0.033985469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:55:52 compute-0 podman[137706]: 2025-12-05 09:55:52.657431762 +0000 UTC m=+0.141973338 container init c4b50a8ba43cf2193eaec26322c2f6bcba348abd906dd7f5b8fa5a2b997b57ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:55:52 compute-0 podman[137706]: 2025-12-05 09:55:52.666088186 +0000 UTC m=+0.150629712 container start c4b50a8ba43cf2193eaec26322c2f6bcba348abd906dd7f5b8fa5a2b997b57ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:55:52 compute-0 podman[137706]: 2025-12-05 09:55:52.670110054 +0000 UTC m=+0.154651640 container attach c4b50a8ba43cf2193eaec26322c2f6bcba348abd906dd7f5b8fa5a2b997b57ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 05 09:55:52 compute-0 loving_galois[137757]: 167 167
Dec 05 09:55:52 compute-0 systemd[1]: libpod-c4b50a8ba43cf2193eaec26322c2f6bcba348abd906dd7f5b8fa5a2b997b57ac.scope: Deactivated successfully.
Dec 05 09:55:52 compute-0 podman[137706]: 2025-12-05 09:55:52.676652851 +0000 UTC m=+0.161194387 container died c4b50a8ba43cf2193eaec26322c2f6bcba348abd906dd7f5b8fa5a2b997b57ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 09:55:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f44f97ac0b33df82654dc89017530be432244aa0c76bd5892e692ac1da8fd44-merged.mount: Deactivated successfully.
Dec 05 09:55:52 compute-0 podman[137706]: 2025-12-05 09:55:52.727218018 +0000 UTC m=+0.211759564 container remove c4b50a8ba43cf2193eaec26322c2f6bcba348abd906dd7f5b8fa5a2b997b57ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:55:52 compute-0 systemd[1]: libpod-conmon-c4b50a8ba43cf2193eaec26322c2f6bcba348abd906dd7f5b8fa5a2b997b57ac.scope: Deactivated successfully.
Dec 05 09:55:52 compute-0 python3.9[137756]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928551.7258847-312-267278724213157/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=322af90fd8d3fe943be0d3ad4fbc6b1261e9e970 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:55:52 compute-0 sudo[137752]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:52 compute-0 podman[137784]: 2025-12-05 09:55:52.926560944 +0000 UTC m=+0.065714706 container create c26ad2da98e8a093b5e5594b842ac8a4ac246f3af91ca0e88377c64267bf351f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_murdock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:55:52 compute-0 systemd[1]: Started libpod-conmon-c26ad2da98e8a093b5e5594b842ac8a4ac246f3af91ca0e88377c64267bf351f.scope.
Dec 05 09:55:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f2dc5d9bed89f6404c0757cae5aa19a5f2dfae34680398896aebd61a7bb42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f2dc5d9bed89f6404c0757cae5aa19a5f2dfae34680398896aebd61a7bb42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f2dc5d9bed89f6404c0757cae5aa19a5f2dfae34680398896aebd61a7bb42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f2dc5d9bed89f6404c0757cae5aa19a5f2dfae34680398896aebd61a7bb42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f2dc5d9bed89f6404c0757cae5aa19a5f2dfae34680398896aebd61a7bb42/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:52 compute-0 podman[137784]: 2025-12-05 09:55:52.996118824 +0000 UTC m=+0.135272626 container init c26ad2da98e8a093b5e5594b842ac8a4ac246f3af91ca0e88377c64267bf351f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:55:53 compute-0 podman[137784]: 2025-12-05 09:55:52.906577495 +0000 UTC m=+0.045731287 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:55:53 compute-0 podman[137784]: 2025-12-05 09:55:53.006532166 +0000 UTC m=+0.145685948 container start c26ad2da98e8a093b5e5594b842ac8a4ac246f3af91ca0e88377c64267bf351f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_murdock, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:55:53 compute-0 podman[137784]: 2025-12-05 09:55:53.010735709 +0000 UTC m=+0.149889491 container attach c26ad2da98e8a093b5e5594b842ac8a4ac246f3af91ca0e88377c64267bf351f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 09:55:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:53 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:53 compute-0 sudo[137954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waqlbhnawvzaftnpmnjcztqmrnbtadmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928552.968126-312-110134412570155/AnsiballZ_stat.py'
Dec 05 09:55:53 compute-0 sudo[137954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:53 compute-0 ceph-mon[74418]: pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 09:55:53 compute-0 ceph-mon[74418]: pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 689 B/s wr, 2 op/s
Dec 05 09:55:53 compute-0 gracious_murdock[137844]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:55:53 compute-0 gracious_murdock[137844]: --> All data devices are unavailable
Dec 05 09:55:53 compute-0 podman[137784]: 2025-12-05 09:55:53.411199481 +0000 UTC m=+0.550353273 container died c26ad2da98e8a093b5e5594b842ac8a4ac246f3af91ca0e88377c64267bf351f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_murdock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 09:55:53 compute-0 systemd[1]: libpod-c26ad2da98e8a093b5e5594b842ac8a4ac246f3af91ca0e88377c64267bf351f.scope: Deactivated successfully.
Dec 05 09:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e4f2dc5d9bed89f6404c0757cae5aa19a5f2dfae34680398896aebd61a7bb42-merged.mount: Deactivated successfully.
Dec 05 09:55:53 compute-0 podman[137784]: 2025-12-05 09:55:53.466495445 +0000 UTC m=+0.605649227 container remove c26ad2da98e8a093b5e5594b842ac8a4ac246f3af91ca0e88377c64267bf351f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 09:55:53 compute-0 systemd[1]: libpod-conmon-c26ad2da98e8a093b5e5594b842ac8a4ac246f3af91ca0e88377c64267bf351f.scope: Deactivated successfully.
Dec 05 09:55:53 compute-0 sudo[137553]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:53 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:53 compute-0 python3.9[137959]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:55:53 compute-0 sudo[137975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:53 compute-0 sudo[137975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:55:53 compute-0 sudo[137975]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:53 compute-0 sudo[137954]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:53.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:53 compute-0 sudo[138000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:55:53 compute-0 sudo[138000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:55:53 compute-0 sudo[138170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reqksengulkmmmndbfgjtbymoygilkej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928552.968126-312-110134412570155/AnsiballZ_copy.py'
Dec 05 09:55:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 09:55:53 compute-0 sudo[138170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:53 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 09:55:54 compute-0 podman[138188]: 2025-12-05 09:55:54.047168916 +0000 UTC m=+0.048966473 container create 68c4bcd4775ea74f41d58fd50a2ec923bd8e1ab55e5b088ef6e7fc4e814d4c0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wu, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:55:54 compute-0 systemd[1]: Started libpod-conmon-68c4bcd4775ea74f41d58fd50a2ec923bd8e1ab55e5b088ef6e7fc4e814d4c0e.scope.
Dec 05 09:55:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:55:54 compute-0 podman[138188]: 2025-12-05 09:55:54.032005987 +0000 UTC m=+0.033803544 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:55:54 compute-0 podman[138188]: 2025-12-05 09:55:54.128447573 +0000 UTC m=+0.130245190 container init 68c4bcd4775ea74f41d58fd50a2ec923bd8e1ab55e5b088ef6e7fc4e814d4c0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wu, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 05 09:55:54 compute-0 podman[138188]: 2025-12-05 09:55:54.135606287 +0000 UTC m=+0.137403844 container start 68c4bcd4775ea74f41d58fd50a2ec923bd8e1ab55e5b088ef6e7fc4e814d4c0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:55:54 compute-0 infallible_wu[138203]: 167 167
Dec 05 09:55:54 compute-0 systemd[1]: libpod-68c4bcd4775ea74f41d58fd50a2ec923bd8e1ab55e5b088ef6e7fc4e814d4c0e.scope: Deactivated successfully.
Dec 05 09:55:54 compute-0 podman[138188]: 2025-12-05 09:55:54.140048456 +0000 UTC m=+0.141846043 container attach 68c4bcd4775ea74f41d58fd50a2ec923bd8e1ab55e5b088ef6e7fc4e814d4c0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wu, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 05 09:55:54 compute-0 conmon[138203]: conmon 68c4bcd4775ea74f41d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68c4bcd4775ea74f41d58fd50a2ec923bd8e1ab55e5b088ef6e7fc4e814d4c0e.scope/container/memory.events
Dec 05 09:55:54 compute-0 podman[138188]: 2025-12-05 09:55:54.141367382 +0000 UTC m=+0.143164939 container died 68c4bcd4775ea74f41d58fd50a2ec923bd8e1ab55e5b088ef6e7fc4e814d4c0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:55:54 compute-0 python3.9[138184]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928552.968126-312-110134412570155/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9d982f77b67fd359f39a3d6624db62dd4d29195b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:55:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-22eb4bd55ac012e899b75a5b511072d1401eb12f16bead1130391a03555a57f4-merged.mount: Deactivated successfully.
Dec 05 09:55:54 compute-0 sudo[138170]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:54 compute-0 podman[138188]: 2025-12-05 09:55:54.185691809 +0000 UTC m=+0.187489366 container remove 68c4bcd4775ea74f41d58fd50a2ec923bd8e1ab55e5b088ef6e7fc4e814d4c0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 05 09:55:54 compute-0 systemd[1]: libpod-conmon-68c4bcd4775ea74f41d58fd50a2ec923bd8e1ab55e5b088ef6e7fc4e814d4c0e.scope: Deactivated successfully.
Dec 05 09:55:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:54 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:54 compute-0 podman[138251]: 2025-12-05 09:55:54.32410221 +0000 UTC m=+0.039393065 container create 3e361d7cee50f65b27e5e5640f2e88ac278ab3ebd7df76eea56a758f72a131fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mclaren, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:55:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:55:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:54.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:55:54 compute-0 systemd[1]: Started libpod-conmon-3e361d7cee50f65b27e5e5640f2e88ac278ab3ebd7df76eea56a758f72a131fb.scope.
Dec 05 09:55:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d3946375527dc48c454d9f4aed5278ec14148afc76ab9286712b2f654d217d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d3946375527dc48c454d9f4aed5278ec14148afc76ab9286712b2f654d217d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d3946375527dc48c454d9f4aed5278ec14148afc76ab9286712b2f654d217d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d3946375527dc48c454d9f4aed5278ec14148afc76ab9286712b2f654d217d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:54 compute-0 podman[138251]: 2025-12-05 09:55:54.399666242 +0000 UTC m=+0.114957117 container init 3e361d7cee50f65b27e5e5640f2e88ac278ab3ebd7df76eea56a758f72a131fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 09:55:54 compute-0 podman[138251]: 2025-12-05 09:55:54.306726441 +0000 UTC m=+0.022017316 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:55:54 compute-0 podman[138251]: 2025-12-05 09:55:54.407425732 +0000 UTC m=+0.122716577 container start 3e361d7cee50f65b27e5e5640f2e88ac278ab3ebd7df76eea56a758f72a131fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mclaren, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 09:55:54 compute-0 podman[138251]: 2025-12-05 09:55:54.41106915 +0000 UTC m=+0.126360035 container attach 3e361d7cee50f65b27e5e5640f2e88ac278ab3ebd7df76eea56a758f72a131fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 09:55:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:55:54 compute-0 sudo[138399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzazkeystdsxfbajusbriziltjiuyhbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928554.3197136-312-130779679940731/AnsiballZ_stat.py'
Dec 05 09:55:54 compute-0 sudo[138399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:54 compute-0 cool_mclaren[138313]: {
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:     "1": [
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:         {
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             "devices": [
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "/dev/loop3"
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             ],
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             "lv_name": "ceph_lv0",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             "lv_size": "21470642176",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             "name": "ceph_lv0",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             "tags": {
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.cluster_name": "ceph",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.crush_device_class": "",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.encrypted": "0",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.osd_id": "1",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.type": "block",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.vdo": "0",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:                 "ceph.with_tpm": "0"
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             },
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             "type": "block",
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:             "vg_name": "ceph_vg0"
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:         }
Dec 05 09:55:54 compute-0 cool_mclaren[138313]:     ]
Dec 05 09:55:54 compute-0 cool_mclaren[138313]: }
Dec 05 09:55:54 compute-0 systemd[1]: libpod-3e361d7cee50f65b27e5e5640f2e88ac278ab3ebd7df76eea56a758f72a131fb.scope: Deactivated successfully.
Dec 05 09:55:54 compute-0 podman[138251]: 2025-12-05 09:55:54.738751095 +0000 UTC m=+0.454041940 container died 3e361d7cee50f65b27e5e5640f2e88ac278ab3ebd7df76eea56a758f72a131fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 05 09:55:54 compute-0 python3.9[138403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:55:54 compute-0 sudo[138399]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:55 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:55 compute-0 ceph-mon[74418]: pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 09:55:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8d3946375527dc48c454d9f4aed5278ec14148afc76ab9286712b2f654d217d-merged.mount: Deactivated successfully.
Dec 05 09:55:55 compute-0 podman[138251]: 2025-12-05 09:55:55.248406667 +0000 UTC m=+0.963697522 container remove 3e361d7cee50f65b27e5e5640f2e88ac278ab3ebd7df76eea56a758f72a131fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:55:55 compute-0 systemd[1]: libpod-conmon-3e361d7cee50f65b27e5e5640f2e88ac278ab3ebd7df76eea56a758f72a131fb.scope: Deactivated successfully.
Dec 05 09:55:55 compute-0 sudo[138000]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:55 compute-0 sudo[138517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:55 compute-0 sudo[138562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfinhgwtnafjsgurhoqkpjevjcmqfkcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928554.3197136-312-130779679940731/AnsiballZ_copy.py'
Dec 05 09:55:55 compute-0 sudo[138517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:55:55 compute-0 sudo[138562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:55 compute-0 sudo[138517]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:55 compute-0 sudo[138567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:55:55 compute-0 sudo[138567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:55:55 compute-0 python3.9[138565]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928554.3197136-312-130779679940731/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0c3d698bd19304f3223e15a6b1bc23cad766299f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:55:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:55 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:55 compute-0 sudo[138562]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:55:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:55.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:55:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:55:55] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Dec 05 09:55:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:55:55] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Dec 05 09:55:55 compute-0 podman[138652]: 2025-12-05 09:55:55.771946285 +0000 UTC m=+0.039139659 container create 829be1e6835875e25738b1353541c351cf5ac0c633a633edc74af82764f4ece9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:55:55 compute-0 systemd[1]: Started libpod-conmon-829be1e6835875e25738b1353541c351cf5ac0c633a633edc74af82764f4ece9.scope.
Dec 05 09:55:55 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:55:55 compute-0 podman[138652]: 2025-12-05 09:55:55.754502983 +0000 UTC m=+0.021696387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:55:55 compute-0 podman[138652]: 2025-12-05 09:55:55.860560659 +0000 UTC m=+0.127754053 container init 829be1e6835875e25738b1353541c351cf5ac0c633a633edc74af82764f4ece9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 09:55:55 compute-0 podman[138652]: 2025-12-05 09:55:55.867869467 +0000 UTC m=+0.135062841 container start 829be1e6835875e25738b1353541c351cf5ac0c633a633edc74af82764f4ece9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:55:55 compute-0 podman[138652]: 2025-12-05 09:55:55.871093234 +0000 UTC m=+0.138286618 container attach 829be1e6835875e25738b1353541c351cf5ac0c633a633edc74af82764f4ece9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 09:55:55 compute-0 awesome_leavitt[138719]: 167 167
Dec 05 09:55:55 compute-0 systemd[1]: libpod-829be1e6835875e25738b1353541c351cf5ac0c633a633edc74af82764f4ece9.scope: Deactivated successfully.
Dec 05 09:55:55 compute-0 podman[138652]: 2025-12-05 09:55:55.87278368 +0000 UTC m=+0.139977064 container died 829be1e6835875e25738b1353541c351cf5ac0c633a633edc74af82764f4ece9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_leavitt, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 09:55:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fa4a46aebb552f545fb976ac43d0b22582ad772ad6d8bc50f4eb23b0e169d3b-merged.mount: Deactivated successfully.
Dec 05 09:55:55 compute-0 podman[138652]: 2025-12-05 09:55:55.910372595 +0000 UTC m=+0.177565969 container remove 829be1e6835875e25738b1353541c351cf5ac0c633a633edc74af82764f4ece9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_leavitt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 05 09:55:55 compute-0 systemd[1]: libpod-conmon-829be1e6835875e25738b1353541c351cf5ac0c633a633edc74af82764f4ece9.scope: Deactivated successfully.
Dec 05 09:55:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 09:55:56 compute-0 sudo[138811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezdarciusuhsxdfbyrxthbughnsckdmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928555.7728453-449-74403169176103/AnsiballZ_file.py'
Dec 05 09:55:56 compute-0 sudo[138811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:56 compute-0 podman[138817]: 2025-12-05 09:55:56.051749816 +0000 UTC m=+0.036819047 container create e3e588346cf1a63c13cd3d92d06f134a8452f6585b6d0021b30269d4659dae8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shannon, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 09:55:56 compute-0 systemd[1]: Started libpod-conmon-e3e588346cf1a63c13cd3d92d06f134a8452f6585b6d0021b30269d4659dae8f.scope.
Dec 05 09:55:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef9dc5059fa3709432dcac154fbfe3ea83ef7d1eda17f1d7607c6419e1a2260/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef9dc5059fa3709432dcac154fbfe3ea83ef7d1eda17f1d7607c6419e1a2260/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef9dc5059fa3709432dcac154fbfe3ea83ef7d1eda17f1d7607c6419e1a2260/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef9dc5059fa3709432dcac154fbfe3ea83ef7d1eda17f1d7607c6419e1a2260/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:55:56 compute-0 podman[138817]: 2025-12-05 09:55:56.036895534 +0000 UTC m=+0.021964785 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:55:56 compute-0 podman[138817]: 2025-12-05 09:55:56.146338931 +0000 UTC m=+0.131408192 container init e3e588346cf1a63c13cd3d92d06f134a8452f6585b6d0021b30269d4659dae8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:55:56 compute-0 podman[138817]: 2025-12-05 09:55:56.155291064 +0000 UTC m=+0.140360295 container start e3e588346cf1a63c13cd3d92d06f134a8452f6585b6d0021b30269d4659dae8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shannon, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 09:55:56 compute-0 podman[138817]: 2025-12-05 09:55:56.15886782 +0000 UTC m=+0.143937051 container attach e3e588346cf1a63c13cd3d92d06f134a8452f6585b6d0021b30269d4659dae8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shannon, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 05 09:55:56 compute-0 python3.9[138819]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:55:56 compute-0 sudo[138811]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:56 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:55:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:56.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:55:56 compute-0 sudo[139039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kszrfkuboablxzuypefrdwfauwtwrolu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928556.35737-449-161000030976312/AnsiballZ_file.py'
Dec 05 09:55:56 compute-0 sudo[139039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:56 compute-0 lvm[139062]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:55:56 compute-0 lvm[139062]: VG ceph_vg0 finished
Dec 05 09:55:56 compute-0 python3.9[139047]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:55:56 compute-0 sudo[139039]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:56 compute-0 objective_shannon[138835]: {}
Dec 05 09:55:56 compute-0 systemd[1]: libpod-e3e588346cf1a63c13cd3d92d06f134a8452f6585b6d0021b30269d4659dae8f.scope: Deactivated successfully.
Dec 05 09:55:56 compute-0 systemd[1]: libpod-e3e588346cf1a63c13cd3d92d06f134a8452f6585b6d0021b30269d4659dae8f.scope: Consumed 1.126s CPU time.
Dec 05 09:55:56 compute-0 podman[138817]: 2025-12-05 09:55:56.866600785 +0000 UTC m=+0.851670016 container died e3e588346cf1a63c13cd3d92d06f134a8452f6585b6d0021b30269d4659dae8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shannon, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 09:55:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ef9dc5059fa3709432dcac154fbfe3ea83ef7d1eda17f1d7607c6419e1a2260-merged.mount: Deactivated successfully.
Dec 05 09:55:56 compute-0 podman[138817]: 2025-12-05 09:55:56.916093572 +0000 UTC m=+0.901162793 container remove e3e588346cf1a63c13cd3d92d06f134a8452f6585b6d0021b30269d4659dae8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shannon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 09:55:56 compute-0 systemd[1]: libpod-conmon-e3e588346cf1a63c13cd3d92d06f134a8452f6585b6d0021b30269d4659dae8f.scope: Deactivated successfully.
Dec 05 09:55:56 compute-0 sudo[138567]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:55:56 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:55:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:55:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:56.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:55:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:56.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:55:56 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:55:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:57 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:57 compute-0 sudo[139127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:55:57 compute-0 sudo[139127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:55:57 compute-0 sudo[139127]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:57 compute-0 ceph-mon[74418]: pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 09:55:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:55:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:55:57 compute-0 sudo[139253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esepsdsjtomcvfuvacueghqrvpoofbdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928556.9997365-491-266274812088018/AnsiballZ_stat.py'
Dec 05 09:55:57 compute-0 sudo[139253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:57 compute-0 python3.9[139255]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:55:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:55:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:55:57 compute-0 sudo[139253]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:57 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:55:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:55:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:57.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:55:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:55:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:55:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:55:57 compute-0 sudo[139376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnchakixbfiddgyzmrkhykekfnshvcoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928556.9997365-491-266274812088018/AnsiballZ_copy.py'
Dec 05 09:55:57 compute-0 sudo[139376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 09:55:58 compute-0 python3.9[139378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928556.9997365-491-266274812088018/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=447685d5be00648067c34764ef54618210e71d74 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:55:58 compute-0 sudo[139376]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:55:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:58 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:55:58.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:58 compute-0 sudo[139530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfnxtqwxtalpwaidnuodjmvnbbxdfjeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928558.2056048-491-149460900944561/AnsiballZ_stat.py'
Dec 05 09:55:58 compute-0 sudo[139530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:58 compute-0 python3.9[139532]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:55:58 compute-0 sudo[139530]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:58.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:55:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:55:58.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:55:58 compute-0 sudo[139653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azpdyesedirmkopgzjarwmabbsnolimi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928558.2056048-491-149460900944561/AnsiballZ_copy.py'
Dec 05 09:55:58 compute-0 sudo[139653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:59 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:59 compute-0 python3.9[139655]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928558.2056048-491-149460900944561/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9d982f77b67fd359f39a3d6624db62dd4d29195b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:55:59 compute-0 sudo[139653]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:59 compute-0 ceph-mon[74418]: pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 09:55:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:55:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:55:59 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:55:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:55:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:55:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:55:59.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:55:59 compute-0 sudo[139805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvrpqjwwvzdtewvwvldfxvzipebbdmna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928559.3642805-491-30495232164883/AnsiballZ_stat.py'
Dec 05 09:55:59 compute-0 sudo[139805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:55:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095559 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 09:55:59 compute-0 python3.9[139807]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:55:59 compute-0 sudo[139805]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 492 B/s wr, 2 op/s
Dec 05 09:56:00 compute-0 sudo[139929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-womorrmfkfneicubtezraerizijridxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928559.3642805-491-30495232164883/AnsiballZ_copy.py'
Dec 05 09:56:00 compute-0 sudo[139929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:00 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:00 compute-0 python3.9[139931]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928559.3642805-491-30495232164883/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=83eaffa82a3ff311eab3c308bf8f87499cdb32e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:00.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:00 compute-0 sudo[139929]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:01 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:01 compute-0 ceph-mon[74418]: pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 492 B/s wr, 2 op/s
Dec 05 09:56:01 compute-0 sudo[140082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epemkfglcnkpqdtvkitzuzkfeuguyntv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928561.0500243-641-239074265092978/AnsiballZ_file.py'
Dec 05 09:56:01 compute-0 sudo[140082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:01 compute-0 python3.9[140084]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:56:01 compute-0 sudo[140082]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:01 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:01.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:01 compute-0 sudo[140234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkmeczfvxmemzbolthaisnknfmlvvjnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928561.6670594-675-277637251393340/AnsiballZ_stat.py'
Dec 05 09:56:01 compute-0 sudo[140234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 492 B/s wr, 2 op/s
Dec 05 09:56:02 compute-0 python3.9[140236]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:02 compute-0 sudo[140234]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:02 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:02.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:02 compute-0 sudo[140359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jckmxignatglxivwsiqngxpgkkfsuisc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928561.6670594-675-277637251393340/AnsiballZ_copy.py'
Dec 05 09:56:02 compute-0 sudo[140359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:02 compute-0 python3.9[140361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928561.6670594-675-277637251393340/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=707b529c46d00ae67cf5e28b4fee780ec58089b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:02 compute-0 sudo[140359]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:03 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:03 compute-0 sudo[140511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyilzutmvjkcasepwnyjrljtoiloufao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928562.905792-733-12900988703443/AnsiballZ_file.py'
Dec 05 09:56:03 compute-0 sudo[140511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:03 compute-0 ceph-mon[74418]: pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 492 B/s wr, 2 op/s
Dec 05 09:56:03 compute-0 python3.9[140513]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:56:03 compute-0 sudo[140511]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:03 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003cf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:03.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:03 compute-0 sudo[140663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lppnifvewflnlnvtmblrqblokehhwvso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928563.5360842-758-43850682787494/AnsiballZ_stat.py'
Dec 05 09:56:03 compute-0 sudo[140663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 09:56:03 compute-0 python3.9[140665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:04 compute-0 sudo[140663]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:04 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:04 compute-0 sudo[140788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axlrebkrjjpqpavawrkryatytruppezh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928563.5360842-758-43850682787494/AnsiballZ_copy.py'
Dec 05 09:56:04 compute-0 sudo[140788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:04.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:56:04 compute-0 python3.9[140790]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928563.5360842-758-43850682787494/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=707b529c46d00ae67cf5e28b4fee780ec58089b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:04 compute-0 sudo[140788]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:05 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:05 compute-0 sudo[140940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqkalcxeaqcbvmfzzdcxursdkghixuzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928564.7961457-806-49031468779036/AnsiballZ_file.py'
Dec 05 09:56:05 compute-0 sudo[140940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:05 compute-0 ceph-mon[74418]: pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 09:56:05 compute-0 python3.9[140942]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:56:05 compute-0 sudo[140940]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:05 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:05.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:56:05] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Dec 05 09:56:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:56:05] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Dec 05 09:56:05 compute-0 sudo[141092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdcajitysbxxzvomnmbhbctskxrravql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928565.4890833-830-36174677145701/AnsiballZ_stat.py'
Dec 05 09:56:05 compute-0 sudo[141092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:05 compute-0 python3.9[141094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:05 compute-0 sudo[141092]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:56:06 compute-0 sudo[141217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfxszikhrjquauwbzciuyetbxxfmgeht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928565.4890833-830-36174677145701/AnsiballZ_copy.py'
Dec 05 09:56:06 compute-0 sudo[141217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:06.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:06 compute-0 python3.9[141219]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928565.4890833-830-36174677145701/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=707b529c46d00ae67cf5e28b4fee780ec58089b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:06 compute-0 sudo[141217]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:06 compute-0 sudo[141297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:56:06 compute-0 sudo[141297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:56:06 compute-0 sudo[141297]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:06 compute-0 sudo[141394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isdrrvxsbaipfrgtygtdfslrphtatwvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928566.6788855-875-116435364114991/AnsiballZ_file.py'
Dec 05 09:56:06 compute-0 sudo[141394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:06.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:56:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:07 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:07 compute-0 ceph-mon[74418]: pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:56:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:07 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:07 compute-0 python3.9[141396]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:56:07 compute-0 sudo[141394]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:07.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:56:08 compute-0 sudo[141547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adgfusdgnubucuvuwifxwqkvscddnsrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928567.7858212-898-62910232969339/AnsiballZ_stat.py'
Dec 05 09:56:08 compute-0 sudo[141547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:08 compute-0 python3.9[141549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:08 compute-0 sudo[141547]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:08 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:08 compute-0 sudo[141671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzwhmumfrqirsdmodkyhhwodfeplzinh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928567.7858212-898-62910232969339/AnsiballZ_copy.py'
Dec 05 09:56:08 compute-0 sudo[141671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:08 compute-0 python3.9[141673]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928567.7858212-898-62910232969339/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=707b529c46d00ae67cf5e28b4fee780ec58089b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:08.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:56:08 compute-0 sudo[141671]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:09 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:09 compute-0 sudo[141823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzgizgvxrlickkpnvrehcjdrgxcbgxhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928569.1074045-948-207718101973408/AnsiballZ_file.py'
Dec 05 09:56:09 compute-0 sudo[141823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:09 compute-0 ceph-mon[74418]: pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:56:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:56:09 compute-0 python3.9[141825]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:56:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:09 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:09 compute-0 sudo[141823]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:09.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:56:10 compute-0 sudo[141978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-herenxfcdwkryfqjqaibklorfzvnvvvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928569.750752-971-217562774416119/AnsiballZ_stat.py'
Dec 05 09:56:10 compute-0 sudo[141978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:10 compute-0 python3.9[141980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:10 compute-0 sudo[141978]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:10 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:10.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:10 compute-0 sudo[142103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrhomjlnmrkorvkfnypczhohzogptijk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928569.750752-971-217562774416119/AnsiballZ_copy.py'
Dec 05 09:56:10 compute-0 sudo[142103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:10 compute-0 python3.9[142105]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928569.750752-971-217562774416119/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=707b529c46d00ae67cf5e28b4fee780ec58089b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:10 compute-0 sudo[142103]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:11 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f0001240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:11 compute-0 sudo[142255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmanychlzbrzxhvmvdflqfxbwgbdbifc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928571.0441382-1018-55248952840910/AnsiballZ_file.py'
Dec 05 09:56:11 compute-0 sudo[142255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:11 compute-0 ceph-mon[74418]: pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:56:11 compute-0 python3.9[142257]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:56:11 compute-0 sudo[142255]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:11 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:11.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:11 compute-0 sudo[142407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovkicxnpwhhwqlfpeamvsjgqinrairob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928571.7299705-1042-269273212282822/AnsiballZ_stat.py'
Dec 05 09:56:12 compute-0 sudo[142407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:12 compute-0 python3.9[142409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:12 compute-0 sudo[142407]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:12 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:12.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:56:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:56:12 compute-0 sudo[142532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcqymilvnfygjosspvxysrfdhtirwqph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928571.7299705-1042-269273212282822/AnsiballZ_copy.py'
Dec 05 09:56:12 compute-0 sudo[142532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:12 compute-0 python3.9[142534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928571.7299705-1042-269273212282822/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=707b529c46d00ae67cf5e28b4fee780ec58089b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:12 compute-0 sudo[142532]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:13 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:13 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f0001240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:13.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:13 compute-0 ceph-mon[74418]: pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:56:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:14 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:14.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:56:14 compute-0 ceph-mon[74418]: pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:15 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:15 compute-0 sshd-session[135675]: Connection closed by 192.168.122.30 port 60858
Dec 05 09:56:15 compute-0 sshd-session[135672]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:56:15 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Dec 05 09:56:15 compute-0 systemd[1]: session-48.scope: Consumed 23.431s CPU time.
Dec 05 09:56:15 compute-0 systemd-logind[789]: Session 48 logged out. Waiting for processes to exit.
Dec 05 09:56:15 compute-0 systemd-logind[789]: Removed session 48.
Dec 05 09:56:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:15 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:15.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:56:15] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Dec 05 09:56:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:56:15] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Dec 05 09:56:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:16 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f0002130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:56:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:16.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:56:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:16.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:56:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:16.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:56:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:16.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:56:17 compute-0 ceph-mon[74418]: pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:17 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:17 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:17.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:18 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:18.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:18.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:56:19 compute-0 ceph-mon[74418]: pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:19 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f0002130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:19 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:19.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:56:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:56:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:20 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:20.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:20 compute-0 sshd-session[142565]: Connection closed by authenticating user root 87.120.191.21 port 39180 [preauth]
Dec 05 09:56:21 compute-0 ceph-mon[74418]: pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:56:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:21 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:21 compute-0 sshd-session[142569]: Accepted publickey for zuul from 192.168.122.30 port 48468 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:56:21 compute-0 systemd-logind[789]: New session 49 of user zuul.
Dec 05 09:56:21 compute-0 systemd[1]: Started Session 49 of User zuul.
Dec 05 09:56:21 compute-0 sshd-session[142569]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:56:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:21 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f0002130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:21.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:21 compute-0 sudo[142722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwvuwjfbcmpwhplenrymqwvtkxoavfsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928581.2958667-26-136245845998181/AnsiballZ_file.py'
Dec 05 09:56:21 compute-0 sudo[142722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:21 compute-0 python3.9[142724]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:22 compute-0 sudo[142722]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:22 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:56:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:22.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:56:22 compute-0 sudo[142876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygstvsqhlwgtwbitmqrhntvngdpmlgjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928582.214792-62-73928702101233/AnsiballZ_stat.py'
Dec 05 09:56:22 compute-0 sudo[142876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:22 compute-0 python3.9[142878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:22 compute-0 sudo[142876]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:23 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:23 compute-0 sudo[142999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gweogvvackzrkkhlcpfrkuzxungjdeoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928582.214792-62-73928702101233/AnsiballZ_copy.py'
Dec 05 09:56:23 compute-0 sudo[142999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:23 compute-0 python3.9[143001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764928582.214792-62-73928702101233/.source.conf _original_basename=ceph.conf follow=False checksum=d04abb6ae1f8e91ca71ced05ddf296d068b094d5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:23 compute-0 ceph-mon[74418]: pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:23 compute-0 sudo[142999]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:23 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00035c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:23.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:23 compute-0 sudo[143151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsowkjvezjhdvwpsrstwzhxvkvlfubaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928583.7082546-62-144040335144640/AnsiballZ_stat.py'
Dec 05 09:56:23 compute-0 sudo[143151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:24 compute-0 python3.9[143153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:24 compute-0 sudo[143151]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:24 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:24.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:24 compute-0 ceph-mon[74418]: pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:56:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:25 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:25 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:25.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:56:25] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Dec 05 09:56:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:56:25] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Dec 05 09:56:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:26 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00035c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:26.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:26 compute-0 sudo[143278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvqwdngkuqfofbqkuobkolmbmhqvzvkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928583.7082546-62-144040335144640/AnsiballZ_copy.py'
Dec 05 09:56:26 compute-0 sudo[143278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:26 compute-0 python3.9[143280]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764928583.7082546-62-144040335144640/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=e07228101fefffc0e2e19f022990975c2f351480 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:26 compute-0 sudo[143278]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:26 compute-0 sudo[143305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:56:26 compute-0 sudo[143305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:56:26 compute-0 sudo[143305]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:27.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:56:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:27 compute-0 sshd-session[142572]: Connection closed by 192.168.122.30 port 48468
Dec 05 09:56:27 compute-0 sshd-session[142569]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:56:27 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Dec 05 09:56:27 compute-0 systemd[1]: session-49.scope: Consumed 2.726s CPU time.
Dec 05 09:56:27 compute-0 systemd-logind[789]: Session 49 logged out. Waiting for processes to exit.
Dec 05 09:56:27 compute-0 systemd-logind[789]: Removed session 49.
Dec 05 09:56:27 compute-0 ceph-mon[74418]: pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:56:27
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.meta', 'images', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', '.nfs', 'cephfs.cephfs.meta']
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 09:56:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:56:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:56:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:56:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:27.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:56:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:56:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:28 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:28.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:28.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:56:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:29 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00035c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:29 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:29.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:29 compute-0 ceph-mon[74418]: pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:56:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:56:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:30 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:30.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:30 compute-0 ceph-mon[74418]: pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:56:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:31 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:31 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00035c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:31.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:32 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:56:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:32.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:56:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:33 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:33 compute-0 ceph-mon[74418]: pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:33 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:33.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:34 compute-0 sshd-session[143336]: Accepted publickey for zuul from 192.168.122.30 port 48600 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:56:34 compute-0 systemd-logind[789]: New session 50 of user zuul.
Dec 05 09:56:34 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec 05 09:56:34 compute-0 sshd-session[143336]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:56:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:34 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:34.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:56:35 compute-0 python3.9[143491]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:56:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:35 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:35 compute-0 ceph-mon[74418]: pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:35 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:56:35] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:56:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:56:35] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:56:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:35.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:36 compute-0 sudo[143645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noleunzutkhbwsujlnulktpbvdvnexop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928595.6177719-62-131991680938461/AnsiballZ_file.py'
Dec 05 09:56:36 compute-0 sudo[143645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:36 compute-0 python3.9[143647]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:56:36 compute-0 sudo[143645]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:36 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:56:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:36.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:56:36 compute-0 sudo[143799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pplpdjptwadytvurzlugvfaiyvhgefpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928596.3940933-62-19409667985687/AnsiballZ_file.py'
Dec 05 09:56:36 compute-0 sudo[143799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:36 compute-0 python3.9[143801]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:56:36 compute-0 sudo[143799]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:36 compute-0 ceph-mon[74418]: pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:37.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:56:37 compute-0 ceph-mgr[74711]: [dashboard INFO request] [192.168.122.100:59746] [POST] [200] [0.005s] [4.0B] [a2442dd3-7a2f-428d-b8b8-123229a7dea5] /api/prometheus_receiver
Dec 05 09:56:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:37 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:37 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:37.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:37 compute-0 python3.9[143951]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:56:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:38 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:38.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:38 compute-0 sudo[144103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzqfipzdjuvtpqzcivorbsojkzedoblz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928597.9956744-131-162346179873608/AnsiballZ_seboolean.py'
Dec 05 09:56:38 compute-0 sudo[144103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:38.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:56:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:38.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:56:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:38.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:56:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:39 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:39 compute-0 python3.9[144105]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 05 09:56:39 compute-0 ceph-mon[74418]: pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:39 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:39.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:56:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:56:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:40 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:40.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:40 compute-0 ceph-mon[74418]: pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:56:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:41 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:41 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:41.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:42 compute-0 sudo[144103]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:42 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:56:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:42.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:56:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:56:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:56:42 compute-0 sudo[144265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltqrfjatdsnerhmdcwqfeajjrjuwqafq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928602.4707706-161-277311247226343/AnsiballZ_setup.py'
Dec 05 09:56:42 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 05 09:56:42 compute-0 sudo[144265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:43 compute-0 python3.9[144267]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:56:43 compute-0 ceph-mon[74418]: pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:56:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:43 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:43 compute-0 sudo[144265]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:43 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:56:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:43.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:56:43 compute-0 sudo[144349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uptpazkydavpsilgidvehnifwvnzqgds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928602.4707706-161-277311247226343/AnsiballZ_dnf.py'
Dec 05 09:56:43 compute-0 sudo[144349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:44 compute-0 python3.9[144351]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:56:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:44 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:44.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:56:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:45 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:45 compute-0 ceph-mon[74418]: pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:45 compute-0 sudo[144349]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:45 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:56:45] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:56:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:56:45] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:56:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:45.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:46 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:46.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:46 compute-0 sudo[144506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luabmklawkkgzvfylnidftzrbrdamtgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928605.8963497-197-51597216008874/AnsiballZ_systemd.py'
Dec 05 09:56:46 compute-0 sudo[144506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:46 compute-0 python3.9[144508]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:56:46 compute-0 sudo[144506]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:46 compute-0 sudo[144536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:56:46 compute-0 sudo[144536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:56:46 compute-0 sudo[144536]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:47 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:47 compute-0 ceph-mon[74418]: pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:47 compute-0 sudo[144686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyalfelqzzxbofqlsldrlsnvxidmbnrj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764928607.0728111-221-129723633966053/AnsiballZ_edpm_nftables_snippet.py'
Dec 05 09:56:47 compute-0 sudo[144686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:47 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:47.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:47 compute-0 python3[144688]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 05 09:56:47 compute-0 sudo[144686]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:48 compute-0 sudo[144840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjqsjaruvmmnkppnjrdpyyotychhuvha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928608.0382404-248-191228253463863/AnsiballZ_file.py'
Dec 05 09:56:48 compute-0 sudo[144840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:48 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:48.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:48 compute-0 python3.9[144842]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:48 compute-0 sudo[144840]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:48.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:56:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:49 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:49 compute-0 sudo[144992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grpzkyljhdypjwalzvfshroqkhslydwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928608.813183-272-136284983338465/AnsiballZ_stat.py'
Dec 05 09:56:49 compute-0 sudo[144992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:49 compute-0 python3.9[144994]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:49 compute-0 sudo[144992]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:49 compute-0 ceph-mon[74418]: pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:49 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:49.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:49 compute-0 sudo[145070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcmdkdzjiqghcndqzohlhhokjhkcxlkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928608.813183-272-136284983338465/AnsiballZ_file.py'
Dec 05 09:56:49 compute-0 sudo[145070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:49 compute-0 python3.9[145072]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:56:49 compute-0 sudo[145070]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:56:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:50 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:56:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:50.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:56:50 compute-0 sudo[145224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkvzrdtvomcxarnrphezejxicidqkfup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928610.3075895-308-222009809888844/AnsiballZ_stat.py'
Dec 05 09:56:50 compute-0 sudo[145224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:50 compute-0 ceph-mon[74418]: pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:56:50 compute-0 python3.9[145226]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:50 compute-0 sudo[145224]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:51 compute-0 sudo[145302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avlxeowqzajrnzzrcsfyxqgktinhremq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928610.3075895-308-222009809888844/AnsiballZ_file.py'
Dec 05 09:56:51 compute-0 sudo[145302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:51 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:51 compute-0 python3.9[145304]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.7_3ya_hl recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:51 compute-0 sudo[145302]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:51 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:56:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:51.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:56:51 compute-0 sudo[145454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbvfuatkrsvbxigkwvoakdkjcecfbxkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928611.5366127-344-67376485350021/AnsiballZ_stat.py'
Dec 05 09:56:51 compute-0 sudo[145454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:51 compute-0 python3.9[145456]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:52 compute-0 sudo[145454]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:52 compute-0 sudo[145533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eurjicygvizwhyhoknowxpsmrlqwqfry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928611.5366127-344-67376485350021/AnsiballZ_file.py'
Dec 05 09:56:52 compute-0 sudo[145533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:52 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:52 compute-0 python3.9[145535]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:52 compute-0 sudo[145533]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:52.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:53 compute-0 ceph-mon[74418]: pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:53 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:53 compute-0 sudo[145686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxptpuxavjenabliowwxxzholwxpkjdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928612.8686142-383-234891006897568/AnsiballZ_command.py'
Dec 05 09:56:53 compute-0 sudo[145686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:53 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:53.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:53 compute-0 python3.9[145688]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:56:53 compute-0 sudo[145686]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:54 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:54.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:54 compute-0 sudo[145842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzuoansihreaelsudjnermitvkazyode ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764928614.0319953-407-2581438575553/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 09:56:54 compute-0 sudo[145842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:54 compute-0 python3[145844]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 09:56:54 compute-0 sudo[145842]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:55 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:55 compute-0 sudo[145994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbfqypifsrwnorpzlsreddlffgikxmkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928614.8993793-431-244002192932972/AnsiballZ_stat.py'
Dec 05 09:56:55 compute-0 sudo[145994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:56:55 compute-0 python3.9[145996]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:55 compute-0 sudo[145994]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:55 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:56:55] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:56:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:56:55] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:56:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:55.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:55 compute-0 ceph-mon[74418]: pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:55 compute-0 sudo[146119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iymmalzcibberlhkvccakxfqnwzudavt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928614.8993793-431-244002192932972/AnsiballZ_copy.py'
Dec 05 09:56:55 compute-0 sudo[146119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:56 compute-0 python3.9[146121]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928614.8993793-431-244002192932972/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:56 compute-0 sudo[146119]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:56 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:56.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:56 compute-0 sudo[146273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yytrsmgrjnhgfxrfnykzvijhiokweicb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928616.3781514-476-119482659550543/AnsiballZ_stat.py'
Dec 05 09:56:56 compute-0 sudo[146273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:56 compute-0 python3.9[146275]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:56 compute-0 sudo[146273]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:57 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:57 compute-0 sudo[146361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:56:57 compute-0 sudo[146361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:56:57 compute-0 sudo[146361]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:57 compute-0 sudo[146443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trcuebcoeknxbiomlzwpnpeudomhesje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928616.3781514-476-119482659550543/AnsiballZ_copy.py'
Dec 05 09:56:57 compute-0 sudo[146443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:57 compute-0 sudo[146409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:56:57 compute-0 sudo[146409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:56:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:56:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:56:57 compute-0 python3.9[146448]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928616.3781514-476-119482659550543/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:57 compute-0 sudo[146443]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:56:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:56:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:56:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:56:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:57 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:56:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:56:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:56:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:57.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:56:57 compute-0 ceph-mon[74418]: pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:58 compute-0 sudo[146409]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:56:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:56:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:56:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:56:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 301 B/s rd, 0 op/s
Dec 05 09:56:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:56:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:56:58 compute-0 sudo[146633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqrdlflcvzfibhynrvozhnwhjbsygoeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928617.9928248-521-220907352248431/AnsiballZ_stat.py'
Dec 05 09:56:58 compute-0 sudo[146633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:58 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:56:58 compute-0 python3.9[146635]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:56:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000052s ======
Dec 05 09:56:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:56:58.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 05 09:56:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:56:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:56:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:56:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:56:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:56:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:56:58 compute-0 sudo[146633]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:56:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:56:58 compute-0 sudo[146638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:56:58 compute-0 sudo[146638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:56:58 compute-0 sudo[146638]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:58 compute-0 sudo[146683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:56:58 compute-0 sudo[146683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:56:58 compute-0 sudo[146810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voqckljxmhibzinkmadinpkospeccfrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928617.9928248-521-220907352248431/AnsiballZ_copy.py'
Dec 05 09:56:58 compute-0 sudo[146810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:56:58 compute-0 ceph-mon[74418]: pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:56:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:56:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:56:58 compute-0 ceph-mon[74418]: pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 301 B/s rd, 0 op/s
Dec 05 09:56:58 compute-0 ceph-mon[74418]: pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Dec 05 09:56:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:56:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:56:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:56:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:56:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:56:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:58.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:56:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:56:58.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:56:59 compute-0 podman[146851]: 2025-12-05 09:56:59.011349298 +0000 UTC m=+0.050000003 container create 95214c8900ab59a93ebc672a50b5779c42302c35bf14ec60d4e5fae046d0a2e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_goldwasser, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 09:56:59 compute-0 python3.9[146821]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928617.9928248-521-220907352248431/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:56:59 compute-0 sudo[146810]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:59 compute-0 systemd[1]: Started libpod-conmon-95214c8900ab59a93ebc672a50b5779c42302c35bf14ec60d4e5fae046d0a2e7.scope.
Dec 05 09:56:59 compute-0 podman[146851]: 2025-12-05 09:56:58.986902972 +0000 UTC m=+0.025553697 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:56:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:56:59 compute-0 podman[146851]: 2025-12-05 09:56:59.113586243 +0000 UTC m=+0.152236948 container init 95214c8900ab59a93ebc672a50b5779c42302c35bf14ec60d4e5fae046d0a2e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_goldwasser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 09:56:59 compute-0 podman[146851]: 2025-12-05 09:56:59.122099912 +0000 UTC m=+0.160750627 container start 95214c8900ab59a93ebc672a50b5779c42302c35bf14ec60d4e5fae046d0a2e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:56:59 compute-0 podman[146851]: 2025-12-05 09:56:59.126250343 +0000 UTC m=+0.164901068 container attach 95214c8900ab59a93ebc672a50b5779c42302c35bf14ec60d4e5fae046d0a2e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:56:59 compute-0 tender_goldwasser[146867]: 167 167
Dec 05 09:56:59 compute-0 systemd[1]: libpod-95214c8900ab59a93ebc672a50b5779c42302c35bf14ec60d4e5fae046d0a2e7.scope: Deactivated successfully.
Dec 05 09:56:59 compute-0 podman[146851]: 2025-12-05 09:56:59.128339379 +0000 UTC m=+0.166990094 container died 95214c8900ab59a93ebc672a50b5779c42302c35bf14ec60d4e5fae046d0a2e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_goldwasser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 09:56:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:59 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d660f031bc0971096f8ffe834f265b1ab78588f8d3fab3a3fd8787d1bea7ced-merged.mount: Deactivated successfully.
Dec 05 09:56:59 compute-0 podman[146851]: 2025-12-05 09:56:59.171445636 +0000 UTC m=+0.210096361 container remove 95214c8900ab59a93ebc672a50b5779c42302c35bf14ec60d4e5fae046d0a2e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 09:56:59 compute-0 systemd[1]: libpod-conmon-95214c8900ab59a93ebc672a50b5779c42302c35bf14ec60d4e5fae046d0a2e7.scope: Deactivated successfully.
Dec 05 09:56:59 compute-0 podman[146915]: 2025-12-05 09:56:59.337285118 +0000 UTC m=+0.028550108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:56:59 compute-0 podman[146915]: 2025-12-05 09:56:59.482710152 +0000 UTC m=+0.173975132 container create 5543bc45ee6a38dae6f21d5b45edfbc11fc91f25495c90b94fffc4e94db19608 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 09:56:59 compute-0 systemd[1]: Started libpod-conmon-5543bc45ee6a38dae6f21d5b45edfbc11fc91f25495c90b94fffc4e94db19608.scope.
Dec 05 09:56:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:56:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7b333bdc98a3d3838155dde37817b456cb7c2846ad8e77b87f415ae8a310dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:56:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7b333bdc98a3d3838155dde37817b456cb7c2846ad8e77b87f415ae8a310dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:56:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7b333bdc98a3d3838155dde37817b456cb7c2846ad8e77b87f415ae8a310dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:56:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7b333bdc98a3d3838155dde37817b456cb7c2846ad8e77b87f415ae8a310dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:56:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7b333bdc98a3d3838155dde37817b456cb7c2846ad8e77b87f415ae8a310dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:56:59 compute-0 podman[146915]: 2025-12-05 09:56:59.586659582 +0000 UTC m=+0.277924542 container init 5543bc45ee6a38dae6f21d5b45edfbc11fc91f25495c90b94fffc4e94db19608 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_noyce, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:56:59 compute-0 podman[146915]: 2025-12-05 09:56:59.595915021 +0000 UTC m=+0.287179961 container start 5543bc45ee6a38dae6f21d5b45edfbc11fc91f25495c90b94fffc4e94db19608 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 09:56:59 compute-0 podman[146915]: 2025-12-05 09:56:59.600178475 +0000 UTC m=+0.291443425 container attach 5543bc45ee6a38dae6f21d5b45edfbc11fc91f25495c90b94fffc4e94db19608 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 05 09:56:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:56:59 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:56:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:56:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:56:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:56:59.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:56:59 compute-0 sudo[147067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxjegsdnpenwlnmlanvqweygjydaguhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928619.6278653-566-114410791418371/AnsiballZ_stat.py'
Dec 05 09:56:59 compute-0 sudo[147067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:56:59 compute-0 confident_noyce[146931]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:56:59 compute-0 confident_noyce[146931]: --> All data devices are unavailable
Dec 05 09:56:59 compute-0 systemd[1]: libpod-5543bc45ee6a38dae6f21d5b45edfbc11fc91f25495c90b94fffc4e94db19608.scope: Deactivated successfully.
Dec 05 09:56:59 compute-0 podman[146915]: 2025-12-05 09:56:59.954531668 +0000 UTC m=+0.645796618 container died 5543bc45ee6a38dae6f21d5b45edfbc11fc91f25495c90b94fffc4e94db19608 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_noyce, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:57:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a7b333bdc98a3d3838155dde37817b456cb7c2846ad8e77b87f415ae8a310dd-merged.mount: Deactivated successfully.
Dec 05 09:57:00 compute-0 podman[146915]: 2025-12-05 09:57:00.040392773 +0000 UTC m=+0.731657713 container remove 5543bc45ee6a38dae6f21d5b45edfbc11fc91f25495c90b94fffc4e94db19608 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 09:57:00 compute-0 systemd[1]: libpod-conmon-5543bc45ee6a38dae6f21d5b45edfbc11fc91f25495c90b94fffc4e94db19608.scope: Deactivated successfully.
Dec 05 09:57:00 compute-0 python3.9[147069]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:57:00 compute-0 sudo[146683]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:00 compute-0 sudo[147067]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:00 compute-0 sudo[147090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:00 compute-0 sudo[147090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:57:00 compute-0 sudo[147090]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 374 B/s rd, 0 op/s
Dec 05 09:57:00 compute-0 sudo[147123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:57:00 compute-0 sudo[147123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:57:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:00 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:57:00 compute-0 sudo[147274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmbbxxldasrrafjkjleogoobpkqlpnjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928619.6278653-566-114410791418371/AnsiballZ_copy.py'
Dec 05 09:57:00 compute-0 sudo[147274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:00.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:00 compute-0 podman[147304]: 2025-12-05 09:57:00.598861895 +0000 UTC m=+0.055906941 container create da0266619a2d511e4b0f28b296e1a8c50e92c1d5195d8956a45924133bd17514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_haibt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:57:00 compute-0 python3.9[147278]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928619.6278653-566-114410791418371/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:00 compute-0 systemd[1]: Started libpod-conmon-da0266619a2d511e4b0f28b296e1a8c50e92c1d5195d8956a45924133bd17514.scope.
Dec 05 09:57:00 compute-0 sudo[147274]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:00 compute-0 podman[147304]: 2025-12-05 09:57:00.569778994 +0000 UTC m=+0.026824120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:57:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:57:00 compute-0 podman[147304]: 2025-12-05 09:57:00.720903951 +0000 UTC m=+0.177949027 container init da0266619a2d511e4b0f28b296e1a8c50e92c1d5195d8956a45924133bd17514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:57:00 compute-0 podman[147304]: 2025-12-05 09:57:00.731872386 +0000 UTC m=+0.188917462 container start da0266619a2d511e4b0f28b296e1a8c50e92c1d5195d8956a45924133bd17514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:57:00 compute-0 podman[147304]: 2025-12-05 09:57:00.73540321 +0000 UTC m=+0.192448286 container attach da0266619a2d511e4b0f28b296e1a8c50e92c1d5195d8956a45924133bd17514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 09:57:00 compute-0 compassionate_haibt[147321]: 167 167
Dec 05 09:57:00 compute-0 systemd[1]: libpod-da0266619a2d511e4b0f28b296e1a8c50e92c1d5195d8956a45924133bd17514.scope: Deactivated successfully.
Dec 05 09:57:00 compute-0 podman[147304]: 2025-12-05 09:57:00.740839386 +0000 UTC m=+0.197884442 container died da0266619a2d511e4b0f28b296e1a8c50e92c1d5195d8956a45924133bd17514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 09:57:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-eec329d19bf6c8225d2cf6451586e7f12f0548ed9de0594111d29df44e56c250-merged.mount: Deactivated successfully.
Dec 05 09:57:00 compute-0 podman[147304]: 2025-12-05 09:57:00.870905739 +0000 UTC m=+0.327950785 container remove da0266619a2d511e4b0f28b296e1a8c50e92c1d5195d8956a45924133bd17514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_haibt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 09:57:00 compute-0 systemd[1]: libpod-conmon-da0266619a2d511e4b0f28b296e1a8c50e92c1d5195d8956a45924133bd17514.scope: Deactivated successfully.
Dec 05 09:57:01 compute-0 podman[147370]: 2025-12-05 09:57:01.056288095 +0000 UTC m=+0.076752831 container create bafdc16a52350683eadab34c2c380114764072b1535eb6a96fa7f0510757e31b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:57:01 compute-0 podman[147370]: 2025-12-05 09:57:01.018821919 +0000 UTC m=+0.039286705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:57:01 compute-0 systemd[1]: Started libpod-conmon-bafdc16a52350683eadab34c2c380114764072b1535eb6a96fa7f0510757e31b.scope.
Dec 05 09:57:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:01 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3fbe47fa1c088d7a94284f1cd9f9d9a8533d1b94d38efbb9ac2ebda1fb4bb50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3fbe47fa1c088d7a94284f1cd9f9d9a8533d1b94d38efbb9ac2ebda1fb4bb50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3fbe47fa1c088d7a94284f1cd9f9d9a8533d1b94d38efbb9ac2ebda1fb4bb50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3fbe47fa1c088d7a94284f1cd9f9d9a8533d1b94d38efbb9ac2ebda1fb4bb50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:57:01 compute-0 podman[147370]: 2025-12-05 09:57:01.199723485 +0000 UTC m=+0.220188181 container init bafdc16a52350683eadab34c2c380114764072b1535eb6a96fa7f0510757e31b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:57:01 compute-0 podman[147370]: 2025-12-05 09:57:01.205699616 +0000 UTC m=+0.226164312 container start bafdc16a52350683eadab34c2c380114764072b1535eb6a96fa7f0510757e31b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:57:01 compute-0 podman[147370]: 2025-12-05 09:57:01.231366045 +0000 UTC m=+0.251830891 container attach bafdc16a52350683eadab34c2c380114764072b1535eb6a96fa7f0510757e31b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 09:57:01 compute-0 ceph-mon[74418]: pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 374 B/s rd, 0 op/s
Dec 05 09:57:01 compute-0 sudo[147520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkxghcxusrfrqtujuyllwwzrtwtpemmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928621.1251042-611-275442978850510/AnsiballZ_stat.py'
Dec 05 09:57:01 compute-0 sudo[147520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]: {
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:     "1": [
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:         {
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             "devices": [
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "/dev/loop3"
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             ],
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             "lv_name": "ceph_lv0",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             "lv_size": "21470642176",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             "name": "ceph_lv0",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             "tags": {
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.cluster_name": "ceph",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.crush_device_class": "",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.encrypted": "0",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.osd_id": "1",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.type": "block",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.vdo": "0",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:                 "ceph.with_tpm": "0"
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             },
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             "type": "block",
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:             "vg_name": "ceph_vg0"
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:         }
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]:     ]
Dec 05 09:57:01 compute-0 stoic_bhaskara[147409]: }
Dec 05 09:57:01 compute-0 systemd[1]: libpod-bafdc16a52350683eadab34c2c380114764072b1535eb6a96fa7f0510757e31b.scope: Deactivated successfully.
Dec 05 09:57:01 compute-0 podman[147370]: 2025-12-05 09:57:01.556800321 +0000 UTC m=+0.577265017 container died bafdc16a52350683eadab34c2c380114764072b1535eb6a96fa7f0510757e31b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:57:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3fbe47fa1c088d7a94284f1cd9f9d9a8533d1b94d38efbb9ac2ebda1fb4bb50-merged.mount: Deactivated successfully.
Dec 05 09:57:01 compute-0 podman[147370]: 2025-12-05 09:57:01.605752005 +0000 UTC m=+0.626216701 container remove bafdc16a52350683eadab34c2c380114764072b1535eb6a96fa7f0510757e31b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bhaskara, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 09:57:01 compute-0 systemd[1]: libpod-conmon-bafdc16a52350683eadab34c2c380114764072b1535eb6a96fa7f0510757e31b.scope: Deactivated successfully.
Dec 05 09:57:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:01 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:01 compute-0 sudo[147123]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:57:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:01.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:57:01 compute-0 sudo[147535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:01 compute-0 sudo[147535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:57:01 compute-0 sudo[147535]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:01 compute-0 python3.9[147522]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:57:01 compute-0 sudo[147520]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:01 compute-0 sudo[147560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:57:01 compute-0 sudo[147560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:57:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 374 B/s rd, 0 op/s
Dec 05 09:57:02 compute-0 podman[147674]: 2025-12-05 09:57:02.186434693 +0000 UTC m=+0.052955382 container create d3056f5ccb076eea459863bd1ce31deb3beb97d32bf9ffbc1284f28125340c9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_sanderson, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 09:57:02 compute-0 systemd[1]: Started libpod-conmon-d3056f5ccb076eea459863bd1ce31deb3beb97d32bf9ffbc1284f28125340c9e.scope.
Dec 05 09:57:02 compute-0 podman[147674]: 2025-12-05 09:57:02.159051229 +0000 UTC m=+0.025571918 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:57:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:57:02 compute-0 podman[147674]: 2025-12-05 09:57:02.274141948 +0000 UTC m=+0.140662627 container init d3056f5ccb076eea459863bd1ce31deb3beb97d32bf9ffbc1284f28125340c9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 09:57:02 compute-0 podman[147674]: 2025-12-05 09:57:02.281483145 +0000 UTC m=+0.148003804 container start d3056f5ccb076eea459863bd1ce31deb3beb97d32bf9ffbc1284f28125340c9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 05 09:57:02 compute-0 elegant_sanderson[147739]: 167 167
Dec 05 09:57:02 compute-0 systemd[1]: libpod-d3056f5ccb076eea459863bd1ce31deb3beb97d32bf9ffbc1284f28125340c9e.scope: Deactivated successfully.
Dec 05 09:57:02 compute-0 sudo[147770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkkhbpvjuxatisfdjshssenmuedzljsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928621.1251042-611-275442978850510/AnsiballZ_copy.py'
Dec 05 09:57:02 compute-0 sudo[147770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:02 compute-0 podman[147674]: 2025-12-05 09:57:02.303465405 +0000 UTC m=+0.169986064 container attach d3056f5ccb076eea459863bd1ce31deb3beb97d32bf9ffbc1284f28125340c9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_sanderson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 09:57:02 compute-0 podman[147674]: 2025-12-05 09:57:02.304249646 +0000 UTC m=+0.170770315 container died d3056f5ccb076eea459863bd1ce31deb3beb97d32bf9ffbc1284f28125340c9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_sanderson, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 09:57:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:02 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0690a2a2199bdf405274cf449b1254de71ab87583c567490316e5025aa72826-merged.mount: Deactivated successfully.
Dec 05 09:57:02 compute-0 podman[147674]: 2025-12-05 09:57:02.375057667 +0000 UTC m=+0.241578336 container remove d3056f5ccb076eea459863bd1ce31deb3beb97d32bf9ffbc1284f28125340c9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_sanderson, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 09:57:02 compute-0 systemd[1]: libpod-conmon-d3056f5ccb076eea459863bd1ce31deb3beb97d32bf9ffbc1284f28125340c9e.scope: Deactivated successfully.
Dec 05 09:57:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:57:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:02.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:57:02 compute-0 python3.9[147778]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928621.1251042-611-275442978850510/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:02 compute-0 sudo[147770]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 compute-0 podman[147792]: 2025-12-05 09:57:02.549481839 +0000 UTC m=+0.043418106 container create 1870937de9f101184ab47af824a4da95c6a91946b079fb759d3b9e5172dcca71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:57:02 compute-0 systemd[1]: Started libpod-conmon-1870937de9f101184ab47af824a4da95c6a91946b079fb759d3b9e5172dcca71.scope.
Dec 05 09:57:02 compute-0 podman[147792]: 2025-12-05 09:57:02.529823612 +0000 UTC m=+0.023759909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:57:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c30035113ed433204c1cf05306623215be470c97916c53a634edfc5e94b347a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c30035113ed433204c1cf05306623215be470c97916c53a634edfc5e94b347a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c30035113ed433204c1cf05306623215be470c97916c53a634edfc5e94b347a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c30035113ed433204c1cf05306623215be470c97916c53a634edfc5e94b347a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:57:02 compute-0 podman[147792]: 2025-12-05 09:57:02.712919467 +0000 UTC m=+0.206855774 container init 1870937de9f101184ab47af824a4da95c6a91946b079fb759d3b9e5172dcca71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:57:02 compute-0 podman[147792]: 2025-12-05 09:57:02.720998803 +0000 UTC m=+0.214935120 container start 1870937de9f101184ab47af824a4da95c6a91946b079fb759d3b9e5172dcca71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 09:57:02 compute-0 podman[147792]: 2025-12-05 09:57:02.725941756 +0000 UTC m=+0.219878033 container attach 1870937de9f101184ab47af824a4da95c6a91946b079fb759d3b9e5172dcca71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_turing, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:57:03 compute-0 sudo[147993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhvntrvbhhppkxktytwkkbxwxqxlkbhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928622.8698902-656-135392205887405/AnsiballZ_file.py'
Dec 05 09:57:03 compute-0 sudo[147993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:03 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc004080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:03 compute-0 python3.9[148001]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:03 compute-0 sudo[147993]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 compute-0 ceph-mon[74418]: pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 374 B/s rd, 0 op/s
Dec 05 09:57:03 compute-0 lvm[148053]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:57:03 compute-0 lvm[148053]: VG ceph_vg0 finished
Dec 05 09:57:03 compute-0 lvm[148063]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:57:03 compute-0 lvm[148063]: VG ceph_vg0 finished
Dec 05 09:57:03 compute-0 xenodochial_turing[147832]: {}
Dec 05 09:57:03 compute-0 systemd[1]: libpod-1870937de9f101184ab47af824a4da95c6a91946b079fb759d3b9e5172dcca71.scope: Deactivated successfully.
Dec 05 09:57:03 compute-0 podman[147792]: 2025-12-05 09:57:03.511188086 +0000 UTC m=+1.005124373 container died 1870937de9f101184ab47af824a4da95c6a91946b079fb759d3b9e5172dcca71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:57:03 compute-0 systemd[1]: libpod-1870937de9f101184ab47af824a4da95c6a91946b079fb759d3b9e5172dcca71.scope: Consumed 1.241s CPU time.
Dec 05 09:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c30035113ed433204c1cf05306623215be470c97916c53a634edfc5e94b347a-merged.mount: Deactivated successfully.
Dec 05 09:57:03 compute-0 podman[147792]: 2025-12-05 09:57:03.559945615 +0000 UTC m=+1.053881892 container remove 1870937de9f101184ab47af824a4da95c6a91946b079fb759d3b9e5172dcca71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_turing, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 05 09:57:03 compute-0 systemd[1]: libpod-conmon-1870937de9f101184ab47af824a4da95c6a91946b079fb759d3b9e5172dcca71.scope: Deactivated successfully.
Dec 05 09:57:03 compute-0 sudo[147560]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:57:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:57:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:57:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:03 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:03 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:57:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:03.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:03 compute-0 sudo[148080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:57:03 compute-0 sudo[148080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:57:03 compute-0 sudo[148080]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 compute-0 sudo[148228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdevrbhovjjwqpangkubvwioxwdyfgjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928623.6895134-680-123976115477608/AnsiballZ_command.py'
Dec 05 09:57:03 compute-0 sudo[148228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:04 compute-0 python3.9[148230]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:57:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 374 B/s rd, 0 op/s
Dec 05 09:57:04 compute-0 sudo[148228]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:04 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:04.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:04 compute-0 sudo[148385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppopkpcuqzjbvqvdvhiduhgxqndklouw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928624.478028-704-122551491169436/AnsiballZ_blockinfile.py'
Dec 05 09:57:04 compute-0 sudo[148385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:05 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:57:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:57:05 compute-0 ceph-mon[74418]: pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 374 B/s rd, 0 op/s
Dec 05 09:57:05 compute-0 python3.9[148387]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:05 compute-0 sudo[148385]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:57:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:05 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc0040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:57:05] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Dec 05 09:57:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:57:05] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Dec 05 09:57:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:05.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:05 compute-0 sudo[148537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnfmhxkuxndbirvrvteenzdnvknaaofn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928625.5154834-731-253625215750179/AnsiballZ_command.py'
Dec 05 09:57:05 compute-0 sudo[148537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:06 compute-0 python3.9[148539]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:57:06 compute-0 sudo[148537]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 374 B/s rd, 0 op/s
Dec 05 09:57:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc0040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:06.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:06 compute-0 sudo[148692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whexeqylrwthximwfhtryrzihorxxhjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928626.3864408-755-158065954988932/AnsiballZ_stat.py'
Dec 05 09:57:06 compute-0 sudo[148692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:06 compute-0 ceph-mon[74418]: pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 374 B/s rd, 0 op/s
Dec 05 09:57:06 compute-0 python3.9[148694]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:57:06 compute-0 sudo[148692]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:07 compute-0 sudo[148721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:57:07 compute-0 sudo[148721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:57:07 compute-0 sudo[148721]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:07 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:07 compute-0 sudo[148871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgmhwldcdocwwplxtqpynypkzlxcljkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928627.222432-779-222332110383386/AnsiballZ_command.py'
Dec 05 09:57:07 compute-0 sudo[148871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:07 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:07 compute-0 python3.9[148873]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:57:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:07 compute-0 sudo[148871]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Dec 05 09:57:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:08 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc0040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:08 compute-0 sudo[149028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aocdusmrldgghhiiiqrltxoagzguewym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928628.1752894-803-157388639786610/AnsiballZ_file.py'
Dec 05 09:57:08 compute-0 sudo[149028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:08.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:08 compute-0 python3.9[149030]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:08 compute-0 sudo[149028]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:57:08.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:57:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:09 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:09 compute-0 ceph-mon[74418]: pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Dec 05 09:57:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:09 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:09.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:09 compute-0 python3.9[149180]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:57:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:57:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:10 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:57:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:10.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:11 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc0040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:11 compute-0 sudo[149333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elhqmhontfesqrqolkblgnumdkydfsge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928630.9736614-923-257450310210541/AnsiballZ_command.py'
Dec 05 09:57:11 compute-0 sudo[149333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:11 compute-0 ceph-mon[74418]: pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:57:11 compute-0 python3.9[149335]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:57:11 compute-0 ovs-vsctl[149336]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 05 09:57:11 compute-0 sudo[149333]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:11 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:11.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:12 compute-0 sudo[149487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgbzogkcdmkoqsspccgqhjyuqxtruide ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928631.8243604-950-246617654749298/AnsiballZ_command.py'
Dec 05 09:57:12 compute-0 sudo[149487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:12 compute-0 python3.9[149489]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:57:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:12 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:12 compute-0 sudo[149487]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:57:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:12.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:57:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:57:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:57:12 compute-0 sudo[149643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmpotppuqxmwastmkxlvekljhjcfvywy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928632.5398326-974-107654801407491/AnsiballZ_command.py'
Dec 05 09:57:12 compute-0 sudo[149643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:12 compute-0 python3.9[149645]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:57:12 compute-0 ovs-vsctl[149646]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 05 09:57:13 compute-0 sudo[149643]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:13 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:13 compute-0 ceph-mon[74418]: pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:57:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:13 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc0040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:13 compute-0 python3.9[149796]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:57:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:13.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:14 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:57:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:14.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:57:14 compute-0 sudo[149950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oemxcrxbqdiywxaxflzdoawuepsaypuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928634.2971504-1025-34451847881841/AnsiballZ_file.py'
Dec 05 09:57:14 compute-0 sudo[149950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:14 compute-0 python3.9[149952]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:57:14 compute-0 sudo[149950]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:15 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:15 compute-0 sudo[150102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecrcfmgidlqspnpsrrymrdzsrtkjmkqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928635.082176-1049-83269841130120/AnsiballZ_stat.py'
Dec 05 09:57:15 compute-0 sudo[150102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:15 compute-0 ceph-mon[74418]: pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:57:15 compute-0 python3.9[150104]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:57:15 compute-0 sudo[150102]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:15 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:57:15] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Dec 05 09:57:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:57:15] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Dec 05 09:57:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:57:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:15.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:57:15 compute-0 sudo[150180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmmrakneffqcagvkacclhhgueunfmoam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928635.082176-1049-83269841130120/AnsiballZ_file.py'
Dec 05 09:57:15 compute-0 sudo[150180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:15 compute-0 python3.9[150182]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:57:15 compute-0 sudo[150180]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:16 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc004100 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:16 compute-0 sudo[150334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avlckfzjagzwlgxftsavkiekygjdsmya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928636.1430464-1049-202143266016878/AnsiballZ_stat.py'
Dec 05 09:57:16 compute-0 sudo[150334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:57:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:16.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:57:16 compute-0 python3.9[150336]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:57:16 compute-0 sudo[150334]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 compute-0 sudo[150412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxmkzlkolwvydnxrubzcrddizdmmbpml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928636.1430464-1049-202143266016878/AnsiballZ_file.py'
Dec 05 09:57:16 compute-0 sudo[150412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:17 compute-0 python3.9[150414]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:57:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:17 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:17 compute-0 sudo[150412]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:17 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:57:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:17.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:57:17 compute-0 sudo[150564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctaavitlatinnlpkzfxadklcyzbodkof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928637.500213-1118-12026056718119/AnsiballZ_file.py'
Dec 05 09:57:17 compute-0 sudo[150564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:18 compute-0 python3.9[150566]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:18 compute-0 sudo[150564]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:18 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:18 compute-0 ceph-mon[74418]: pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:57:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:18.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:57:18 compute-0 sudo[150718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyifuggsvkyygwfcbuhdqunwjfullham ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928638.5370896-1142-114911566292146/AnsiballZ_stat.py'
Dec 05 09:57:18 compute-0 sudo[150718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:57:18.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:57:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:57:18.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:57:18 compute-0 python3.9[150720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:57:19 compute-0 sudo[150718]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:19 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:19 compute-0 sudo[150796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foimwbszjylbwwcqbcmpeekalynuaztd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928638.5370896-1142-114911566292146/AnsiballZ_file.py'
Dec 05 09:57:19 compute-0 sudo[150796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:19 compute-0 python3.9[150798]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:19 compute-0 ceph-mon[74418]: pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:19 compute-0 sudo[150796]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:19 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:19.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:57:20 compute-0 sudo[150949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbgmwqufkqhcbliwhxhupywicctzdtjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928639.9281316-1178-189076660407142/AnsiballZ_stat.py'
Dec 05 09:57:20 compute-0 sudo[150949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:20 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:57:20 compute-0 python3.9[150951]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:57:20 compute-0 sudo[150949]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:57:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:20.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:57:20 compute-0 sudo[151028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioabfxlmhzclgsyevyxedbcfyytmrkem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928639.9281316-1178-189076660407142/AnsiballZ_file.py'
Dec 05 09:57:20 compute-0 sudo[151028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:20 compute-0 ceph-mon[74418]: pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:57:20 compute-0 python3.9[151030]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:20 compute-0 sudo[151028]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:21 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:21 compute-0 sudo[151180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rizjesxlnrppduaiesidkldrrrjsoath ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928641.1760113-1214-188836767742906/AnsiballZ_systemd.py'
Dec 05 09:57:21 compute-0 sudo[151180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:21 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:57:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:21.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:57:21 compute-0 python3.9[151182]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:57:21 compute-0 systemd[1]: Reloading.
Dec 05 09:57:21 compute-0 systemd-rc-local-generator[151206]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:57:21 compute-0 systemd-sysv-generator[151212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:57:22 compute-0 sudo[151180]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:22 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:22.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:22 compute-0 sudo[151372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jijfsesomrndeqrhubdkdjirknkchftf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928642.389984-1238-257674594852601/AnsiballZ_stat.py'
Dec 05 09:57:22 compute-0 sudo[151372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:22 compute-0 python3.9[151374]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:57:23 compute-0 sudo[151372]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:23 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:23 compute-0 sudo[151450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbqabjmvqvubnktqzjqpzbpwjmhxozbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928642.389984-1238-257674594852601/AnsiballZ_file.py'
Dec 05 09:57:23 compute-0 sudo[151450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:23 compute-0 python3.9[151452]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:23 compute-0 sudo[151450]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:23 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:23.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:23 compute-0 ceph-mon[74418]: pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:24 compute-0 sudo[151603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnbluhtcdaozncdkikwrbphumqcflufg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928643.9579217-1274-136006944721743/AnsiballZ_stat.py'
Dec 05 09:57:24 compute-0 sudo[151603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:24 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:24 compute-0 python3.9[151606]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:57:24 compute-0 sudo[151603]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:24.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:24 compute-0 sudo[151682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmlmritjhgctsxatzjdbhwurgtagnden ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928643.9579217-1274-136006944721743/AnsiballZ_file.py'
Dec 05 09:57:24 compute-0 sudo[151682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:24 compute-0 ceph-mon[74418]: pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:24 compute-0 python3.9[151684]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:24 compute-0 sudo[151682]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:25 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:57:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:57:25] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Dec 05 09:57:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:57:25] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Dec 05 09:57:25 compute-0 sudo[151836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgiojvovrsulehnfivvtoeupnbanqyom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928645.348299-1310-231632517341105/AnsiballZ_systemd.py'
Dec 05 09:57:25 compute-0 sudo[151836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:25 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:25.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:25 compute-0 python3.9[151838]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:57:25 compute-0 systemd[1]: Reloading.
Dec 05 09:57:26 compute-0 systemd-rc-local-generator[151867]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:57:26 compute-0 systemd-sysv-generator[151871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:57:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:26 compute-0 systemd[1]: Starting Create netns directory...
Dec 05 09:57:26 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 09:57:26 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 09:57:26 compute-0 systemd[1]: Finished Create netns directory.
Dec 05 09:57:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:26 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:26 compute-0 sudo[151836]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:26.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:27 compute-0 sudo[151959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:57:27 compute-0 sudo[151959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:57:27 compute-0 sudo[151959]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:27 compute-0 ceph-mon[74418]: pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:27 compute-0 sudo[152056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syzlcbfqqhhvhibwgstfwpriapyhdnjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928646.9097493-1340-223224372219799/AnsiballZ_file.py'
Dec 05 09:57:27 compute-0 sudo[152056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:57:27
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.nfs', 'default.rgw.log', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'vms', 'backups', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.rgw.root', '.mgr']
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 09:57:27 compute-0 python3.9[152058]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:57:27 compute-0 sudo[152056]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:57:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:57:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:57:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:57:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:57:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:27.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:57:28 compute-0 sudo[152209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcikiiofuhmwwtedimizhpuzqilvnmhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928647.895177-1364-122799752599564/AnsiballZ_stat.py'
Dec 05 09:57:28 compute-0 sudo[152209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:28 compute-0 python3.9[152211]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:57:28 compute-0 sudo[152209]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:28 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:57:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:28.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:57:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:57:28 compute-0 sudo[152333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlhqzsmusdodtmkshesjklsyuaaanfpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928647.895177-1364-122799752599564/AnsiballZ_copy.py'
Dec 05 09:57:28 compute-0 sudo[152333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:28 compute-0 python3.9[152335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928647.895177-1364-122799752599564/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:57:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:57:28.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:57:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:57:28.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:57:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:57:28.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:57:28 compute-0 sudo[152333]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:29 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:29 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:29.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:29 compute-0 ceph-mon[74418]: pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:57:29 compute-0 sudo[152485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uesjjvpizdmjpwuxzgjeumyejuimzwww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928649.6385322-1415-100862857340274/AnsiballZ_file.py'
Dec 05 09:57:29 compute-0 sudo[152485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:30 compute-0 python3.9[152487]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:57:30 compute-0 sudo[152485]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 09:57:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:30 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:57:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:30.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:30 compute-0 sudo[152639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scicmgytpkmjosthyfynswvvsfuxrboc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928650.353561-1439-199262322040470/AnsiballZ_stat.py'
Dec 05 09:57:30 compute-0 sudo[152639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:30 compute-0 python3.9[152641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:57:30 compute-0 sudo[152639]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:30 compute-0 ceph-mon[74418]: pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 09:57:31 compute-0 sudo[152762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whqlusobusvboihgmqfxixcyiqqhquav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928650.353561-1439-199262322040470/AnsiballZ_copy.py'
Dec 05 09:57:31 compute-0 sudo[152762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:31 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:31 compute-0 python3.9[152764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764928650.353561-1439-199262322040470/.source.json _original_basename=.5_a5i38e follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:31 compute-0 sudo[152762]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:31 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:31.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:32 compute-0 sudo[152915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnmuerlrqvfxfzyykxaizpqkdmncbnar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928651.8819582-1484-40939469412207/AnsiballZ_file.py'
Dec 05 09:57:32 compute-0 sudo[152915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 09:57:32 compute-0 python3.9[152917]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:32 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:32 compute-0 sudo[152915]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:32.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:32 compute-0 sudo[153068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqzfhaloqvotmqswrziaycajdatrdxor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928652.6618578-1508-212212946099398/AnsiballZ_stat.py'
Dec 05 09:57:32 compute-0 sudo[153068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:33 compute-0 sudo[153068]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:33 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:33 compute-0 ceph-mon[74418]: pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 09:57:33 compute-0 sudo[153191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyufyzsulsvedurhcbicecoyqdcyoiay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928652.6618578-1508-212212946099398/AnsiballZ_copy.py'
Dec 05 09:57:33 compute-0 sudo[153191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:33 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:57:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:33.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:57:33 compute-0 sudo[153191]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 09:57:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:34 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:57:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:34.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:57:34 compute-0 sudo[153345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atjaroeorkphxglypemolqilqeuhwcet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928654.1534288-1559-232391829854495/AnsiballZ_container_config_data.py'
Dec 05 09:57:34 compute-0 sudo[153345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:34 compute-0 ceph-mon[74418]: pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 09:57:34 compute-0 python3.9[153347]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 05 09:57:34 compute-0 sudo[153345]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:35 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:57:35 compute-0 sudo[153497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njlspqdtueatntnygodyyjzzfhjmybij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928655.1116536-1586-105115294237087/AnsiballZ_container_config_hash.py'
Dec 05 09:57:35 compute-0 sudo[153497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:57:35] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:57:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:57:35] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec 05 09:57:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:35 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:35.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:35 compute-0 python3.9[153499]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:57:35 compute-0 sudo[153497]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 09:57:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:36 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:36 compute-0 sudo[153651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqqqgtffvekzuatefueitbmynmnfnhby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928656.244762-1613-34171332288185/AnsiballZ_podman_container_info.py'
Dec 05 09:57:36 compute-0 sudo[153651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:36 compute-0 python3.9[153653]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 09:57:37 compute-0 sudo[153651]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:37 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e80043f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:37 compute-0 ceph-mon[74418]: pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 09:57:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095737 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:57:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:37 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:37.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 09:57:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:38 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:38.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:38 compute-0 sudo[153832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbywntfabosgjbtbjwsvwitljcvxtlpe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764928658.1384075-1652-110950508693340/AnsiballZ_edpm_container_manage.py'
Dec 05 09:57:38 compute-0 sudo[153832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:57:38.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:57:38 compute-0 python3[153834]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:57:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:39 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:39 compute-0 ceph-mon[74418]: pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 09:57:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:39 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:39.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 09:57:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:40 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:57:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:41 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700002c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:41 compute-0 ceph-mon[74418]: pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 09:57:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:41 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:41.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:57:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:42 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:57:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:57:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:43 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:43 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17000040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:43.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:43 compute-0 ceph-mon[74418]: pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:57:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:57:44 compute-0 podman[153849]: 2025-12-05 09:57:44.042344818 +0000 UTC m=+5.051276683 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec 05 09:57:44 compute-0 podman[153974]: 2025-12-05 09:57:44.175188536 +0000 UTC m=+0.046058204 container create 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:57:44 compute-0 podman[153974]: 2025-12-05 09:57:44.150353045 +0000 UTC m=+0.021222733 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec 05 09:57:44 compute-0 python3[153834]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec 05 09:57:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:57:44 compute-0 sudo[153832]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:44 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17000040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:44.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:45 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e8004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:57:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:57:45] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 09:57:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:57:45] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 09:57:45 compute-0 ceph-mon[74418]: pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:57:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:45 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:45.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:45 compute-0 sudo[154163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzkuxccwhhfuvnbovnekkbqzsyoxqtxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928665.272169-1676-59519287257250/AnsiballZ_stat.py'
Dec 05 09:57:45 compute-0 sudo[154163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:46 compute-0 python3.9[154165]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:57:46 compute-0 sudo[154163]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:57:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:46 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17000040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:57:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:46.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:57:46 compute-0 sudo[154319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opamojibknsqhosswmnvbjxjnghrgvlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928666.6365483-1703-34425077131841/AnsiballZ_file.py'
Dec 05 09:57:46 compute-0 sudo[154319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:47 compute-0 ceph-mon[74418]: pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:57:47 compute-0 python3.9[154321]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:47 compute-0 sudo[154319]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:47 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:47 compute-0 sudo[154322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:57:47 compute-0 sudo[154322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:57:47 compute-0 sudo[154322]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:47 compute-0 sudo[154420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgyolzrlawumxhvxrxdstjnwofgrmiyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928666.6365483-1703-34425077131841/AnsiballZ_stat.py'
Dec 05 09:57:47 compute-0 sudo[154420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:47 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:57:47 compute-0 python3.9[154422]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:57:47 compute-0 sudo[154420]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:47 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:47.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:48 compute-0 sudo[154572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbsjjiiyuswkwidfragjhmamcnvpoece ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928667.6421578-1703-34014869952764/AnsiballZ_copy.py'
Dec 05 09:57:48 compute-0 sudo[154572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:57:48 compute-0 python3.9[154574]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764928667.6421578-1703-34014869952764/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:48 compute-0 sudo[154572]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:48 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:48 compute-0 sudo[154649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcacwohzyhtkeolcziqiyixruavvtsyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928667.6421578-1703-34014869952764/AnsiballZ_systemd.py'
Dec 05 09:57:48 compute-0 sudo[154649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:57:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:57:48 compute-0 python3.9[154651]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:57:48 compute-0 systemd[1]: Reloading.
Dec 05 09:57:48 compute-0 systemd-rc-local-generator[154676]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:57:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:57:48.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:57:48 compute-0 systemd-sysv-generator[154680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:57:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:49 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17000040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:49 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:57:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:49.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:57:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 09:57:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:50 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e80044b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:50 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:57:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:50 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:57:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:57:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:50.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:57:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:51 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:51 compute-0 sudo[154649]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:57:51 compute-0 sudo[154762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaxpklzaxccbsvhmihzrhdkhtqjlvulb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928667.6421578-1703-34014869952764/AnsiballZ_systemd.py'
Dec 05 09:57:51 compute-0 sudo[154762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:51 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16f00042d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:51.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:52 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:57:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 09:57:52 compute-0 python3.9[154764]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:57:52 compute-0 systemd[1]: Reloading.
Dec 05 09:57:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:52 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17000040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:52 compute-0 ceph-mon[74418]: pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 09:57:52 compute-0 systemd-rc-local-generator[154793]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:57:52 compute-0 systemd-sysv-generator[154798]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:57:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:52.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:52 compute-0 systemd[1]: Starting ovn_controller container...
Dec 05 09:57:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:57:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b3253421b155f423c074f62a1164476c6dd253a67a63fbe5638a363804c8dc/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 05 09:57:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18.
Dec 05 09:57:52 compute-0 podman[154806]: 2025-12-05 09:57:52.801475473 +0000 UTC m=+0.157884015 container init 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 09:57:52 compute-0 ovn_controller[154822]: + sudo -E kolla_set_configs
Dec 05 09:57:52 compute-0 podman[154806]: 2025-12-05 09:57:52.827815785 +0000 UTC m=+0.184224297 container start 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:57:52 compute-0 edpm-start-podman-container[154806]: ovn_controller
Dec 05 09:57:52 compute-0 systemd[1]: Created slice User Slice of UID 0.
Dec 05 09:57:52 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 05 09:57:52 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 05 09:57:52 compute-0 systemd[1]: Starting User Manager for UID 0...
Dec 05 09:57:52 compute-0 systemd[154853]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 05 09:57:52 compute-0 edpm-start-podman-container[154805]: Creating additional drop-in dependency for "ovn_controller" (7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18)
Dec 05 09:57:52 compute-0 podman[154829]: 2025-12-05 09:57:52.919353527 +0000 UTC m=+0.073868456 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 05 09:57:52 compute-0 systemd[1]: Reloading.
Dec 05 09:57:53 compute-0 systemd-rc-local-generator[154907]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:57:53 compute-0 systemd-sysv-generator[154910]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:57:53 compute-0 systemd[154853]: Queued start job for default target Main User Target.
Dec 05 09:57:53 compute-0 systemd[154853]: Created slice User Application Slice.
Dec 05 09:57:53 compute-0 systemd[154853]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 05 09:57:53 compute-0 systemd[154853]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 09:57:53 compute-0 systemd[154853]: Reached target Paths.
Dec 05 09:57:53 compute-0 systemd[154853]: Reached target Timers.
Dec 05 09:57:53 compute-0 systemd[154853]: Starting D-Bus User Message Bus Socket...
Dec 05 09:57:53 compute-0 systemd[154853]: Starting Create User's Volatile Files and Directories...
Dec 05 09:57:53 compute-0 systemd[154853]: Finished Create User's Volatile Files and Directories.
Dec 05 09:57:53 compute-0 systemd[154853]: Listening on D-Bus User Message Bus Socket.
Dec 05 09:57:53 compute-0 systemd[154853]: Reached target Sockets.
Dec 05 09:57:53 compute-0 systemd[154853]: Reached target Basic System.
Dec 05 09:57:53 compute-0 systemd[154853]: Reached target Main User Target.
Dec 05 09:57:53 compute-0 systemd[154853]: Startup finished in 150ms.
Dec 05 09:57:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:53 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e80044d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:53 compute-0 systemd[1]: Started User Manager for UID 0.
Dec 05 09:57:53 compute-0 systemd[1]: Started ovn_controller container.
Dec 05 09:57:53 compute-0 systemd[1]: 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18-13c5fce833fe16fc.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:57:53 compute-0 systemd[1]: 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18-13c5fce833fe16fc.service: Failed with result 'exit-code'.
Dec 05 09:57:53 compute-0 systemd[1]: Started Session c1 of User root.
Dec 05 09:57:53 compute-0 sudo[154762]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:53 compute-0 ovn_controller[154822]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:57:53 compute-0 ovn_controller[154822]: INFO:__main__:Validating config file
Dec 05 09:57:53 compute-0 ovn_controller[154822]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:57:53 compute-0 ovn_controller[154822]: INFO:__main__:Writing out command to execute
Dec 05 09:57:53 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 05 09:57:53 compute-0 ovn_controller[154822]: ++ cat /run_command
Dec 05 09:57:53 compute-0 ovn_controller[154822]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 05 09:57:53 compute-0 ovn_controller[154822]: + ARGS=
Dec 05 09:57:53 compute-0 ovn_controller[154822]: + sudo kolla_copy_cacerts
Dec 05 09:57:53 compute-0 systemd[1]: Started Session c2 of User root.
Dec 05 09:57:53 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 05 09:57:53 compute-0 ovn_controller[154822]: + [[ ! -n '' ]]
Dec 05 09:57:53 compute-0 ovn_controller[154822]: + . kolla_extend_start
Dec 05 09:57:53 compute-0 ovn_controller[154822]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 05 09:57:53 compute-0 ovn_controller[154822]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 05 09:57:53 compute-0 ovn_controller[154822]: + umask 0022
Dec 05 09:57:53 compute-0 ovn_controller[154822]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 05 09:57:53 compute-0 ceph-mon[74418]: pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 09:57:53 compute-0 ceph-mon[74418]: pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 05 09:57:53 compute-0 NetworkManager[48957]: <info>  [1764928673.4059] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 05 09:57:53 compute-0 NetworkManager[48957]: <info>  [1764928673.4067] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 09:57:53 compute-0 NetworkManager[48957]: <info>  [1764928673.4083] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 05 09:57:53 compute-0 NetworkManager[48957]: <info>  [1764928673.4089] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 05 09:57:53 compute-0 NetworkManager[48957]: <info>  [1764928673.4094] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 05 09:57:53 compute-0 kernel: br-int: entered promiscuous mode
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00024|main|INFO|OVS feature set changed, force recompute.
Dec 05 09:57:53 compute-0 systemd-udevd[154978]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 05 09:57:53 compute-0 ovn_controller[154822]: 2025-12-05T09:57:53Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 05 09:57:53 compute-0 NetworkManager[48957]: <info>  [1764928673.4672] manager: (ovn-d254f5-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 05 09:57:53 compute-0 NetworkManager[48957]: <info>  [1764928673.4677] manager: (ovn-235410-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Dec 05 09:57:53 compute-0 NetworkManager[48957]: <info>  [1764928673.4680] manager: (ovn-6e39f0-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec 05 09:57:53 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Dec 05 09:57:53 compute-0 NetworkManager[48957]: <info>  [1764928673.4852] device (genev_sys_6081): carrier: link connected
Dec 05 09:57:53 compute-0 NetworkManager[48957]: <info>  [1764928673.4856] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Dec 05 09:57:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:53 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:53 compute-0 sudo[155085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytemunvxmmmbkopbyzquugpttzxkvcov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928673.4681273-1787-103941901871243/AnsiballZ_command.py'
Dec 05 09:57:53 compute-0 sudo[155085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:57:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:53.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:57:53 compute-0 python3.9[155087]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:57:53 compute-0 ovs-vsctl[155088]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 05 09:57:53 compute-0 sudo[155085]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:57:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:54 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17000040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:54 compute-0 sudo[155240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrsbotsffbicbqeocetgruancgvhnhay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928674.2569597-1811-114540064503201/AnsiballZ_command.py'
Dec 05 09:57:54 compute-0 sudo[155240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:57:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:54.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:57:54 compute-0 python3.9[155242]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:57:54 compute-0 ovs-vsctl[155244]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 05 09:57:54 compute-0 sudo[155240]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:55 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 09:57:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:55 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17000040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:55 compute-0 sudo[155395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbehikwilkkdhtevfmzxbznibfhbdkej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928675.328708-1853-268529267309310/AnsiballZ_command.py'
Dec 05 09:57:55 compute-0 sudo[155395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:57:55 compute-0 ceph-mon[74418]: pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:57:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:57:55] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 09:57:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:57:55] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 09:57:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:55 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e80044f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:55 compute-0 python3.9[155397]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:57:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:57:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:55.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:57:55 compute-0 ovs-vsctl[155399]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 05 09:57:55 compute-0 sudo[155395]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:57:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:56 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:56 compute-0 sshd-session[143339]: Connection closed by 192.168.122.30 port 48600
Dec 05 09:57:56 compute-0 sshd-session[143336]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:57:56 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec 05 09:57:56 compute-0 systemd[1]: session-50.scope: Consumed 56.508s CPU time.
Dec 05 09:57:56 compute-0 systemd-logind[789]: Session 50 logged out. Waiting for processes to exit.
Dec 05 09:57:56 compute-0 systemd-logind[789]: Removed session 50.
Dec 05 09:57:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:56.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:57:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:57 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:57:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:57:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:57:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:57:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:57:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:57:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:57:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:57:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:57 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002590 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:57.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:57 compute-0 ceph-mon[74418]: pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:57:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:57:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:58 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17000040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:57:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:57:58.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:57:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:57:58.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:57:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:59 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:57:59 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:57:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:57:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:57:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:57:59.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 05 09:58:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:00 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002590 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:00.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:01 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:58:01 compute-0 ceph-mon[74418]: pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:58:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:01 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002590 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:58:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095801 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 09:58:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:01 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:01.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 05 09:58:02 compute-0 ceph-mon[74418]: pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 05 09:58:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:02 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:02.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:03 compute-0 sshd-session[155433]: Accepted publickey for zuul from 192.168.122.30 port 49078 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:58:03 compute-0 systemd-logind[789]: New session 52 of user zuul.
Dec 05 09:58:03 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec 05 09:58:03 compute-0 sshd-session[155433]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:58:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:03 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:03 compute-0 ceph-mon[74418]: pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 05 09:58:03 compute-0 systemd[1]: Stopping User Manager for UID 0...
Dec 05 09:58:03 compute-0 systemd[154853]: Activating special unit Exit the Session...
Dec 05 09:58:03 compute-0 systemd[154853]: Stopped target Main User Target.
Dec 05 09:58:03 compute-0 systemd[154853]: Stopped target Basic System.
Dec 05 09:58:03 compute-0 systemd[154853]: Stopped target Paths.
Dec 05 09:58:03 compute-0 systemd[154853]: Stopped target Sockets.
Dec 05 09:58:03 compute-0 systemd[154853]: Stopped target Timers.
Dec 05 09:58:03 compute-0 systemd[154853]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 09:58:03 compute-0 systemd[154853]: Closed D-Bus User Message Bus Socket.
Dec 05 09:58:03 compute-0 systemd[154853]: Stopped Create User's Volatile Files and Directories.
Dec 05 09:58:03 compute-0 systemd[154853]: Removed slice User Application Slice.
Dec 05 09:58:03 compute-0 systemd[154853]: Reached target Shutdown.
Dec 05 09:58:03 compute-0 systemd[154853]: Finished Exit the Session.
Dec 05 09:58:03 compute-0 systemd[154853]: Reached target Exit the Session.
Dec 05 09:58:03 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Dec 05 09:58:03 compute-0 systemd[1]: Stopped User Manager for UID 0.
Dec 05 09:58:03 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 05 09:58:03 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 05 09:58:03 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 05 09:58:03 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 05 09:58:03 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Dec 05 09:58:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:03 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:03.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:03 compute-0 sudo[155561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:03 compute-0 sudo[155561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:58:03 compute-0 sudo[155561]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 compute-0 sudo[155612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:58:04 compute-0 sudo[155612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:58:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 05 09:58:04 compute-0 python3.9[155615]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:58:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:04 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:04 compute-0 sudo[155612]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:58:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:04.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:58:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:58:04 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:58:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:58:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:58:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 487 B/s rd, 97 B/s wr, 0 op/s
Dec 05 09:58:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:58:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:58:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:58:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:58:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:58:04 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:58:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:58:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:58:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:58:04 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:58:04 compute-0 sudo[155701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:04 compute-0 sudo[155701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:58:04 compute-0 sudo[155701]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 compute-0 sudo[155747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:58:04 compute-0 sudo[155747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:58:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:05 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002590 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:05 compute-0 podman[155873]: 2025-12-05 09:58:05.267797525 +0000 UTC m=+0.044516984 container create 1111e9383e32761f25f38e4b2f10e7f4527cf6ef3c3468ea5434829fcf97ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 05 09:58:05 compute-0 systemd[1]: Started libpod-conmon-1111e9383e32761f25f38e4b2f10e7f4527cf6ef3c3468ea5434829fcf97ac94.scope.
Dec 05 09:58:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:58:05 compute-0 sudo[155930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrwvrsterhubksnothkwbzzkcxicmmma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928684.8601964-62-184364592930976/AnsiballZ_file.py'
Dec 05 09:58:05 compute-0 podman[155873]: 2025-12-05 09:58:05.250368724 +0000 UTC m=+0.027088213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:58:05 compute-0 sudo[155930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:05 compute-0 podman[155873]: 2025-12-05 09:58:05.365491414 +0000 UTC m=+0.142210903 container init 1111e9383e32761f25f38e4b2f10e7f4527cf6ef3c3468ea5434829fcf97ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_allen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:58:05 compute-0 podman[155873]: 2025-12-05 09:58:05.374183388 +0000 UTC m=+0.150902847 container start 1111e9383e32761f25f38e4b2f10e7f4527cf6ef3c3468ea5434829fcf97ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_allen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 09:58:05 compute-0 wizardly_allen[155931]: 167 167
Dec 05 09:58:05 compute-0 systemd[1]: libpod-1111e9383e32761f25f38e4b2f10e7f4527cf6ef3c3468ea5434829fcf97ac94.scope: Deactivated successfully.
Dec 05 09:58:05 compute-0 podman[155873]: 2025-12-05 09:58:05.382437291 +0000 UTC m=+0.159156770 container attach 1111e9383e32761f25f38e4b2f10e7f4527cf6ef3c3468ea5434829fcf97ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Dec 05 09:58:05 compute-0 podman[155873]: 2025-12-05 09:58:05.383009137 +0000 UTC m=+0.159728596 container died 1111e9383e32761f25f38e4b2f10e7f4527cf6ef3c3468ea5434829fcf97ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:58:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-019478ca810d73bba8d3f7411ca3a9682e6c0e4f0062e040f38f4c7bb31bedf6-merged.mount: Deactivated successfully.
Dec 05 09:58:05 compute-0 podman[155873]: 2025-12-05 09:58:05.442584687 +0000 UTC m=+0.219304176 container remove 1111e9383e32761f25f38e4b2f10e7f4527cf6ef3c3468ea5434829fcf97ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_allen, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 09:58:05 compute-0 systemd[1]: libpod-conmon-1111e9383e32761f25f38e4b2f10e7f4527cf6ef3c3468ea5434829fcf97ac94.scope: Deactivated successfully.
Dec 05 09:58:05 compute-0 ceph-mon[74418]: pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 05 09:58:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:58:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:58:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:58:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:58:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:58:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:58:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:58:05 compute-0 python3.9[155935]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:05 compute-0 sudo[155930]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:05 compute-0 podman[155959]: 2025-12-05 09:58:05.618404866 +0000 UTC m=+0.052318194 container create bde5f693fb314ddf8d5b938ac2cd60d369723047623d970d729c3d7c69670638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:58:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:58:05] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Dec 05 09:58:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:58:05] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Dec 05 09:58:05 compute-0 systemd[1]: Started libpod-conmon-bde5f693fb314ddf8d5b938ac2cd60d369723047623d970d729c3d7c69670638.scope.
Dec 05 09:58:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d5731cb264a3830e76e59d2ccf2c3107a5118003109f629ae96241312d9205/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d5731cb264a3830e76e59d2ccf2c3107a5118003109f629ae96241312d9205/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d5731cb264a3830e76e59d2ccf2c3107a5118003109f629ae96241312d9205/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d5731cb264a3830e76e59d2ccf2c3107a5118003109f629ae96241312d9205/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d5731cb264a3830e76e59d2ccf2c3107a5118003109f629ae96241312d9205/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:05 compute-0 podman[155959]: 2025-12-05 09:58:05.681567692 +0000 UTC m=+0.115481020 container init bde5f693fb314ddf8d5b938ac2cd60d369723047623d970d729c3d7c69670638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_leakey, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 09:58:05 compute-0 podman[155959]: 2025-12-05 09:58:05.595516927 +0000 UTC m=+0.029430345 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:58:05 compute-0 podman[155959]: 2025-12-05 09:58:05.691805308 +0000 UTC m=+0.125718636 container start bde5f693fb314ddf8d5b938ac2cd60d369723047623d970d729c3d7c69670638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:58:05 compute-0 podman[155959]: 2025-12-05 09:58:05.694710627 +0000 UTC m=+0.128623955 container attach bde5f693fb314ddf8d5b938ac2cd60d369723047623d970d729c3d7c69670638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:58:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:05 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002590 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:05.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:05 compute-0 sleepy_leakey[156000]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:58:05 compute-0 sleepy_leakey[156000]: --> All data devices are unavailable
Dec 05 09:58:06 compute-0 systemd[1]: libpod-bde5f693fb314ddf8d5b938ac2cd60d369723047623d970d729c3d7c69670638.scope: Deactivated successfully.
Dec 05 09:58:06 compute-0 podman[155959]: 2025-12-05 09:58:06.028852242 +0000 UTC m=+0.462765590 container died bde5f693fb314ddf8d5b938ac2cd60d369723047623d970d729c3d7c69670638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:58:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3d5731cb264a3830e76e59d2ccf2c3107a5118003109f629ae96241312d9205-merged.mount: Deactivated successfully.
Dec 05 09:58:06 compute-0 sudo[156152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhuwwnutwivuwttixpmsurqrnunboddd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928685.7415175-62-248866121449323/AnsiballZ_file.py'
Dec 05 09:58:06 compute-0 sudo[156152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:06 compute-0 podman[155959]: 2025-12-05 09:58:06.136735977 +0000 UTC m=+0.570649305 container remove bde5f693fb314ddf8d5b938ac2cd60d369723047623d970d729c3d7c69670638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 05 09:58:06 compute-0 systemd[1]: libpod-conmon-bde5f693fb314ddf8d5b938ac2cd60d369723047623d970d729c3d7c69670638.scope: Deactivated successfully.
Dec 05 09:58:06 compute-0 sudo[155747]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:06 compute-0 sudo[156155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:06 compute-0 sudo[156155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:58:06 compute-0 sudo[156155]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:06 compute-0 sudo[156181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:58:06 compute-0 sudo[156181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:58:06 compute-0 python3.9[156154]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:06 compute-0 sudo[156152]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:58:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:06.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:58:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 483 B/s rd, 96 B/s wr, 0 op/s
Dec 05 09:58:06 compute-0 podman[156344]: 2025-12-05 09:58:06.68182824 +0000 UTC m=+0.023041673 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:58:06 compute-0 sudo[156408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qurlnwztjhnhfszgvxqwsitstmtcdpkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928686.5143075-62-65433864373989/AnsiballZ_file.py'
Dec 05 09:58:06 compute-0 sudo[156408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:06 compute-0 ceph-mgr[74711]: [dashboard INFO request] [192.168.122.100:50792] [POST] [200] [0.004s] [4.0B] [009a5847-c8af-44e5-b8f1-ed81ae6bd19c] /api/prometheus_receiver
Dec 05 09:58:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:07 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:07 compute-0 sudo[156413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:58:07 compute-0 sudo[156413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:58:07 compute-0 sudo[156413]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:58:07 compute-0 python3.9[156410]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:07 compute-0 sudo[156408]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:07 compute-0 ceph-mon[74418]: pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 487 B/s rd, 97 B/s wr, 0 op/s
Dec 05 09:58:07 compute-0 podman[156344]: 2025-12-05 09:58:07.668215113 +0000 UTC m=+1.009428506 container create 2d0dcb21895e62042c2c42c3429d4c4b73e8686c4bd4c189feb89853904e36db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_ishizaka, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:58:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:07 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:07.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:07 compute-0 sudo[156589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyoifiyduanxjbvtoxfprmdapqkikxuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928687.541326-62-45137681097771/AnsiballZ_file.py'
Dec 05 09:58:07 compute-0 sudo[156589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:07 compute-0 systemd[1]: Started libpod-conmon-2d0dcb21895e62042c2c42c3429d4c4b73e8686c4bd4c189feb89853904e36db.scope.
Dec 05 09:58:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:58:07 compute-0 podman[156344]: 2025-12-05 09:58:07.907850427 +0000 UTC m=+1.249063860 container init 2d0dcb21895e62042c2c42c3429d4c4b73e8686c4bd4c189feb89853904e36db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 05 09:58:07 compute-0 podman[156344]: 2025-12-05 09:58:07.917161538 +0000 UTC m=+1.258374921 container start 2d0dcb21895e62042c2c42c3429d4c4b73e8686c4bd4c189feb89853904e36db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:58:07 compute-0 podman[156344]: 2025-12-05 09:58:07.921191477 +0000 UTC m=+1.262404870 container attach 2d0dcb21895e62042c2c42c3429d4c4b73e8686c4bd4c189feb89853904e36db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:58:07 compute-0 distracted_ishizaka[156594]: 167 167
Dec 05 09:58:07 compute-0 systemd[1]: libpod-2d0dcb21895e62042c2c42c3429d4c4b73e8686c4bd4c189feb89853904e36db.scope: Deactivated successfully.
Dec 05 09:58:07 compute-0 conmon[156594]: conmon 2d0dcb21895e62042c2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d0dcb21895e62042c2c42c3429d4c4b73e8686c4bd4c189feb89853904e36db.scope/container/memory.events
Dec 05 09:58:07 compute-0 podman[156344]: 2025-12-05 09:58:07.927379074 +0000 UTC m=+1.268592487 container died 2d0dcb21895e62042c2c42c3429d4c4b73e8686c4bd4c189feb89853904e36db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_ishizaka, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 05 09:58:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ec5819018bc06f896dd4bb00c900b817c0301fcb350a62d655ced8e96818d64-merged.mount: Deactivated successfully.
Dec 05 09:58:07 compute-0 python3.9[156591]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:08 compute-0 podman[156344]: 2025-12-05 09:58:08.015781211 +0000 UTC m=+1.356994584 container remove 2d0dcb21895e62042c2c42c3429d4c4b73e8686c4bd4c189feb89853904e36db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 09:58:08 compute-0 sudo[156589]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:08 compute-0 systemd[1]: libpod-conmon-2d0dcb21895e62042c2c42c3429d4c4b73e8686c4bd4c189feb89853904e36db.scope: Deactivated successfully.
Dec 05 09:58:08 compute-0 podman[156648]: 2025-12-05 09:58:08.170982094 +0000 UTC m=+0.047759541 container create 45e5ad94d78158cb4cb2a572259ad8f3a8bc7447349d825fa87afa2c3001683d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 09:58:08 compute-0 systemd[1]: Started libpod-conmon-45e5ad94d78158cb4cb2a572259ad8f3a8bc7447349d825fa87afa2c3001683d.scope.
Dec 05 09:58:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:58:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a561aa202c85e1d63bfb506adffaa5d7ff603db80239e0f5c0235e94fae2f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a561aa202c85e1d63bfb506adffaa5d7ff603db80239e0f5c0235e94fae2f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:08 compute-0 podman[156648]: 2025-12-05 09:58:08.150349996 +0000 UTC m=+0.027127463 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:58:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a561aa202c85e1d63bfb506adffaa5d7ff603db80239e0f5c0235e94fae2f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a561aa202c85e1d63bfb506adffaa5d7ff603db80239e0f5c0235e94fae2f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:08 compute-0 podman[156648]: 2025-12-05 09:58:08.257608494 +0000 UTC m=+0.134385981 container init 45e5ad94d78158cb4cb2a572259ad8f3a8bc7447349d825fa87afa2c3001683d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chatterjee, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 09:58:08 compute-0 podman[156648]: 2025-12-05 09:58:08.265999531 +0000 UTC m=+0.142776968 container start 45e5ad94d78158cb4cb2a572259ad8f3a8bc7447349d825fa87afa2c3001683d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 09:58:08 compute-0 podman[156648]: 2025-12-05 09:58:08.269899186 +0000 UTC m=+0.146676663 container attach 45e5ad94d78158cb4cb2a572259ad8f3a8bc7447349d825fa87afa2c3001683d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chatterjee, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:58:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:08 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:08 compute-0 sudo[156791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciuuwxrrojzkhgqrdjbrxqxfkvzisvcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928688.1521091-62-259151588663198/AnsiballZ_file.py'
Dec 05 09:58:08 compute-0 sudo[156791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]: {
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:     "1": [
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:         {
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             "devices": [
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "/dev/loop3"
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             ],
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             "lv_name": "ceph_lv0",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             "lv_size": "21470642176",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             "name": "ceph_lv0",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             "tags": {
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.cluster_name": "ceph",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.crush_device_class": "",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.encrypted": "0",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.osd_id": "1",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.type": "block",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.vdo": "0",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:                 "ceph.with_tpm": "0"
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             },
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             "type": "block",
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:             "vg_name": "ceph_vg0"
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:         }
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]:     ]
Dec 05 09:58:08 compute-0 quizzical_chatterjee[156712]: }
Dec 05 09:58:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:08.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:08 compute-0 systemd[1]: libpod-45e5ad94d78158cb4cb2a572259ad8f3a8bc7447349d825fa87afa2c3001683d.scope: Deactivated successfully.
Dec 05 09:58:08 compute-0 podman[156648]: 2025-12-05 09:58:08.595750927 +0000 UTC m=+0.472528374 container died 45e5ad94d78158cb4cb2a572259ad8f3a8bc7447349d825fa87afa2c3001683d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:58:08 compute-0 python3.9[156793]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:08 compute-0 sudo[156791]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:08 compute-0 ceph-mon[74418]: pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 483 B/s rd, 96 B/s wr, 0 op/s
Dec 05 09:58:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4a561aa202c85e1d63bfb506adffaa5d7ff603db80239e0f5c0235e94fae2f0-merged.mount: Deactivated successfully.
Dec 05 09:58:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 774 B/s rd, 96 B/s wr, 1 op/s
Dec 05 09:58:08 compute-0 podman[156648]: 2025-12-05 09:58:08.796641254 +0000 UTC m=+0.673418701 container remove 45e5ad94d78158cb4cb2a572259ad8f3a8bc7447349d825fa87afa2c3001683d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:58:08 compute-0 systemd[1]: libpod-conmon-45e5ad94d78158cb4cb2a572259ad8f3a8bc7447349d825fa87afa2c3001683d.scope: Deactivated successfully.
Dec 05 09:58:08 compute-0 sudo[156181]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:08.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:58:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:08.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:58:08 compute-0 sudo[156834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:08 compute-0 sudo[156834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:58:08 compute-0 sudo[156834]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:08 compute-0 sudo[156885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:58:08 compute-0 sudo[156885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:58:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:09 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:09 compute-0 podman[157048]: 2025-12-05 09:58:09.507779163 +0000 UTC m=+0.045064278 container create d03c1a54b193930dea69be05e970c360dde68c98d3911647aa0965ea3a45781a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Dec 05 09:58:09 compute-0 systemd[1]: Started libpod-conmon-d03c1a54b193930dea69be05e970c360dde68c98d3911647aa0965ea3a45781a.scope.
Dec 05 09:58:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:58:09 compute-0 podman[157048]: 2025-12-05 09:58:09.489059107 +0000 UTC m=+0.026344242 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:58:09 compute-0 python3.9[157019]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:58:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:09 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:09.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:09 compute-0 podman[157048]: 2025-12-05 09:58:09.840906701 +0000 UTC m=+0.378191866 container init d03c1a54b193930dea69be05e970c360dde68c98d3911647aa0965ea3a45781a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:58:09 compute-0 podman[157048]: 2025-12-05 09:58:09.853324546 +0000 UTC m=+0.390609661 container start d03c1a54b193930dea69be05e970c360dde68c98d3911647aa0965ea3a45781a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 09:58:09 compute-0 podman[157048]: 2025-12-05 09:58:09.858347272 +0000 UTC m=+0.395632397 container attach d03c1a54b193930dea69be05e970c360dde68c98d3911647aa0965ea3a45781a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mcnulty, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:58:09 compute-0 kind_mcnulty[157064]: 167 167
Dec 05 09:58:09 compute-0 systemd[1]: libpod-d03c1a54b193930dea69be05e970c360dde68c98d3911647aa0965ea3a45781a.scope: Deactivated successfully.
Dec 05 09:58:09 compute-0 conmon[157064]: conmon d03c1a54b193930dea69 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d03c1a54b193930dea69be05e970c360dde68c98d3911647aa0965ea3a45781a.scope/container/memory.events
Dec 05 09:58:09 compute-0 ceph-mon[74418]: pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 774 B/s rd, 96 B/s wr, 1 op/s
Dec 05 09:58:09 compute-0 podman[157048]: 2025-12-05 09:58:09.863140792 +0000 UTC m=+0.400425907 container died d03c1a54b193930dea69be05e970c360dde68c98d3911647aa0965ea3a45781a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 09:58:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e06b7d726a83cfb9e6fd87d7bf6a688ffcc9a75500fc54171cc012ade545d0a6-merged.mount: Deactivated successfully.
Dec 05 09:58:09 compute-0 podman[157048]: 2025-12-05 09:58:09.968744114 +0000 UTC m=+0.506029229 container remove d03c1a54b193930dea69be05e970c360dde68c98d3911647aa0965ea3a45781a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mcnulty, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 09:58:09 compute-0 systemd[1]: libpod-conmon-d03c1a54b193930dea69be05e970c360dde68c98d3911647aa0965ea3a45781a.scope: Deactivated successfully.
Dec 05 09:58:10 compute-0 podman[157165]: 2025-12-05 09:58:10.132993451 +0000 UTC m=+0.040122895 container create be42d51ffca18b2deebed4f4203a91543f09e6abbc98f524b85a4ff4a1ded19a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_aryabhata, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 09:58:10 compute-0 systemd[1]: Started libpod-conmon-be42d51ffca18b2deebed4f4203a91543f09e6abbc98f524b85a4ff4a1ded19a.scope.
Dec 05 09:58:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bfcb0f062599f26a22579d8e2b18c30b4cc5b1dbe70bf845aa0f39298a969a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bfcb0f062599f26a22579d8e2b18c30b4cc5b1dbe70bf845aa0f39298a969a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bfcb0f062599f26a22579d8e2b18c30b4cc5b1dbe70bf845aa0f39298a969a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bfcb0f062599f26a22579d8e2b18c30b4cc5b1dbe70bf845aa0f39298a969a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:10 compute-0 podman[157165]: 2025-12-05 09:58:10.115615291 +0000 UTC m=+0.022744765 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:58:10 compute-0 podman[157165]: 2025-12-05 09:58:10.215963192 +0000 UTC m=+0.123092636 container init be42d51ffca18b2deebed4f4203a91543f09e6abbc98f524b85a4ff4a1ded19a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:58:10 compute-0 podman[157165]: 2025-12-05 09:58:10.223098814 +0000 UTC m=+0.130228248 container start be42d51ffca18b2deebed4f4203a91543f09e6abbc98f524b85a4ff4a1ded19a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 09:58:10 compute-0 podman[157165]: 2025-12-05 09:58:10.226675782 +0000 UTC m=+0.133805256 container attach be42d51ffca18b2deebed4f4203a91543f09e6abbc98f524b85a4ff4a1ded19a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_aryabhata, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:58:10 compute-0 sudo[157261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abrgcelttublksteqipimjapzehgilox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928689.889636-194-91887494071594/AnsiballZ_seboolean.py'
Dec 05 09:58:10 compute-0 sudo[157261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:10 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:10 compute-0 python3.9[157263]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 05 09:58:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:58:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:10.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:58:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 290 B/s rd, 0 op/s
Dec 05 09:58:10 compute-0 lvm[157333]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:58:10 compute-0 lvm[157333]: VG ceph_vg0 finished
Dec 05 09:58:10 compute-0 goofy_aryabhata[157205]: {}
Dec 05 09:58:10 compute-0 systemd[1]: libpod-be42d51ffca18b2deebed4f4203a91543f09e6abbc98f524b85a4ff4a1ded19a.scope: Deactivated successfully.
Dec 05 09:58:10 compute-0 systemd[1]: libpod-be42d51ffca18b2deebed4f4203a91543f09e6abbc98f524b85a4ff4a1ded19a.scope: Consumed 1.183s CPU time.
Dec 05 09:58:10 compute-0 podman[157165]: 2025-12-05 09:58:10.988464938 +0000 UTC m=+0.895594412 container died be42d51ffca18b2deebed4f4203a91543f09e6abbc98f524b85a4ff4a1ded19a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 09:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-75bfcb0f062599f26a22579d8e2b18c30b4cc5b1dbe70bf845aa0f39298a969a-merged.mount: Deactivated successfully.
Dec 05 09:58:11 compute-0 podman[157165]: 2025-12-05 09:58:11.099298802 +0000 UTC m=+1.006428256 container remove be42d51ffca18b2deebed4f4203a91543f09e6abbc98f524b85a4ff4a1ded19a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_aryabhata, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 09:58:11 compute-0 systemd[1]: libpod-conmon-be42d51ffca18b2deebed4f4203a91543f09e6abbc98f524b85a4ff4a1ded19a.scope: Deactivated successfully.
Dec 05 09:58:11 compute-0 sudo[156885]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:58:11 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:58:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:58:11 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:58:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:11 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17080018f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:11 compute-0 sudo[157349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:58:11 compute-0 sudo[157349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:58:11 compute-0 sudo[157349]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:11 compute-0 sudo[157261]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:11 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:11.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:12 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:12.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 290 B/s rd, 0 op/s
Dec 05 09:58:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:58:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:58:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:58:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:13 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:13 compute-0 python3.9[157525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:13 compute-0 ceph-mon[74418]: pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 290 B/s rd, 0 op/s
Dec 05 09:58:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:58:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:58:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:13 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17080018f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:13.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:14 compute-0 python3.9[157647]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928691.7455869-218-129140854030818/.source follow=False _original_basename=haproxy.j2 checksum=cc5e97ea900947bff0c19d73b88d99840e041f49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:14 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:14 compute-0 ceph-mon[74418]: pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 290 B/s rd, 0 op/s
Dec 05 09:58:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:58:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:14.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:14 compute-0 python3.9[157799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 290 B/s rd, 0 op/s
Dec 05 09:58:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:15 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:15 compute-0 python3.9[157920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928694.2873137-263-246559379587326/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:15 compute-0 ceph-mon[74418]: pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 290 B/s rd, 0 op/s
Dec 05 09:58:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:58:15] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 09:58:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:58:15] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 09:58:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:15 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:15.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:16 compute-0 sudo[158071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgdgsllmgcdczanwknmjpspqtnwazsnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928695.8408344-314-56725238231704/AnsiballZ_setup.py'
Dec 05 09:58:16 compute-0 sudo[158071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:16 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17080018f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:16 compute-0 python3.9[158073]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:58:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:16.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:16 compute-0 sudo[158071]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 0 op/s
Dec 05 09:58:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:16.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:58:17 compute-0 sudo[158156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jezlmntipbqnqcrnlasflkjbcrgzjrfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928695.8408344-314-56725238231704/AnsiballZ_dnf.py'
Dec 05 09:58:17 compute-0 sudo[158156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:17 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:17 compute-0 python3.9[158158]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:58:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:17 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:17.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:17 compute-0 ceph-mon[74418]: pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 0 op/s
Dec 05 09:58:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:58:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:18 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:18.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Dec 05 09:58:18 compute-0 sudo[158156]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:18.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:58:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:19 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708002d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:19 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:19 compute-0 sudo[158311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smwmtdethtbjmyjzcndvaodugemsdhxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928699.1768427-350-186652914454685/AnsiballZ_systemd.py'
Dec 05 09:58:19 compute-0 sudo[158311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:19.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:19 compute-0 ceph-mon[74418]: pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Dec 05 09:58:20 compute-0 python3.9[158313]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:58:20 compute-0 sudo[158311]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:20 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:58:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:20.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:58:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Dec 05 09:58:21 compute-0 python3.9[158468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:21 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:21 compute-0 python3.9[158589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928700.5269976-374-227212198455610/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:21 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708002d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:58:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:21.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:58:21 compute-0 ceph-mon[74418]: pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Dec 05 09:58:22 compute-0 python3.9[158739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:22 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:58:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:22.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:58:22 compute-0 python3.9[158862]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928701.6697145-374-2133545231846/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Dec 05 09:58:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:58:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:23 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:23 compute-0 ovn_controller[154822]: 2025-12-05T09:58:23Z|00025|memory|INFO|16000 kB peak resident set size after 30.0 seconds
Dec 05 09:58:23 compute-0 ovn_controller[154822]: 2025-12-05T09:58:23Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Dec 05 09:58:23 compute-0 podman[158887]: 2025-12-05 09:58:23.42000752 +0000 UTC m=+0.084299138 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 05 09:58:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:23 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:23.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:23 compute-0 ceph-mon[74418]: pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Dec 05 09:58:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:24 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708002d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:24 compute-0 python3.9[159040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:24.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 09:58:25 compute-0 python3.9[159161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928704.09286-506-247769406742559/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:25 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708002d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:58:25] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 09:58:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:58:25] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 09:58:25 compute-0 python3.9[159311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:25 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:25.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:25 compute-0 ceph-mon[74418]: pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 09:58:26 compute-0 python3.9[159432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928705.209354-506-186686973744682/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:26 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:26.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 09:58:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:26.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:58:27 compute-0 python3.9[159584]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:58:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:27 compute-0 sudo[159611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:58:27 compute-0 sudo[159611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:58:27 compute-0 sudo[159611]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:58:27
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.log', '.rgw.root', 'vms', 'images', '.mgr', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta']
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 09:58:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:58:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:58:27 compute-0 sudo[159761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoauiwlxjaqeiyvhvmdmdezojvjnxouo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928707.4129837-620-79048825072198/AnsiballZ_file.py'
Dec 05 09:58:27 compute-0 sudo[159761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:58:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:58:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:27.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:27 compute-0 python3.9[159763]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:27 compute-0 sudo[159761]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:28 compute-0 ceph-mon[74418]: pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 09:58:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:58:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:58:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:28 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:28 compute-0 sudo[159915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izwuxiwdfwkotniqcikczoxixdldfwsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928708.1799848-644-266818407319775/AnsiballZ_stat.py'
Dec 05 09:58:28 compute-0 sudo[159915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:28.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:28 compute-0 python3.9[159917]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:28 compute-0 sudo[159915]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Dec 05 09:58:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:28.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:58:28 compute-0 sudo[159993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-komyiwntzjpxbfmkaavijyehsvyhkxiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928708.1799848-644-266818407319775/AnsiballZ_file.py'
Dec 05 09:58:28 compute-0 sudo[159993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:29 compute-0 python3.9[159995]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:29 compute-0 sudo[159993]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:29 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:29 compute-0 sudo[160145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgfldnlrcpixpfauphamvmbdonyruzef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928709.2942884-644-265322668240221/AnsiballZ_stat.py'
Dec 05 09:58:29 compute-0 sudo[160145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:29 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:29 compute-0 python3.9[160147]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:29.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:29 compute-0 sudo[160145]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:30 compute-0 sudo[160223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgnphvliftlqbgydybdgjebdhajdhgir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928709.2942884-644-265322668240221/AnsiballZ_file.py'
Dec 05 09:58:30 compute-0 sudo[160223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:30 compute-0 ceph-mon[74418]: pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Dec 05 09:58:30 compute-0 python3.9[160225]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:30 compute-0 sudo[160223]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:30 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:30.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 05 09:58:30 compute-0 sudo[160377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctmlknwpilifwngbfnydgpqfybjbpyfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928710.6965358-713-162218438201744/AnsiballZ_file.py'
Dec 05 09:58:30 compute-0 sudo[160377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:31 compute-0 python3.9[160379]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:58:31 compute-0 sudo[160377]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:31 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:31 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:31 compute-0 sudo[160529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkvrzmgcegatzzmgwkqexoxjvmjjvbru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928711.50172-737-6147321136042/AnsiballZ_stat.py'
Dec 05 09:58:31 compute-0 sudo[160529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:31.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:31 compute-0 python3.9[160531]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:31 compute-0 sudo[160529]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:32 compute-0 ceph-mon[74418]: pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 05 09:58:32 compute-0 sudo[160608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yezkgaprurawxtnpvgzxadgtsimukcdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928711.50172-737-6147321136042/AnsiballZ_file.py'
Dec 05 09:58:32 compute-0 sudo[160608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:32 compute-0 python3.9[160610]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:58:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:32 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:32 compute-0 sudo[160608]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:32.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 05 09:58:33 compute-0 sudo[160761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egvlaqwrigpwfzuzhtwqiijumojqzibg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928712.8424332-773-253887675478085/AnsiballZ_stat.py'
Dec 05 09:58:33 compute-0 sudo[160761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:58:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:33 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:33 compute-0 python3.9[160763]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:33 compute-0 sudo[160761]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:33 compute-0 sudo[160839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyowzwguycofqppzwdlqginvocdtukti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928712.8424332-773-253887675478085/AnsiballZ_file.py'
Dec 05 09:58:33 compute-0 sudo[160839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:33 compute-0 python3.9[160841]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:58:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:33 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:33 compute-0 sudo[160839]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:58:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:33.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:58:34 compute-0 ceph-mon[74418]: pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 05 09:58:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:34 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:34 compute-0 sudo[160993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzmybkgdxbtirzvleaccgyfloukehzyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928714.216315-809-160061380864320/AnsiballZ_systemd.py'
Dec 05 09:58:34 compute-0 sudo[160993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:34.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:34 compute-0 python3.9[160995]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:58:34 compute-0 systemd[1]: Reloading.
Dec 05 09:58:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 05 09:58:34 compute-0 systemd-sysv-generator[161019]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:58:34 compute-0 systemd-rc-local-generator[161015]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:58:35 compute-0 sudo[160993]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:35 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:58:35] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Dec 05 09:58:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:58:35] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Dec 05 09:58:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:35 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:35.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:36 compute-0 sudo[161183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjgqkxicwhrujfxurwsjtutvmpekbcva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928715.7908099-833-260442170192622/AnsiballZ_stat.py'
Dec 05 09:58:36 compute-0 sudo[161183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:36 compute-0 ceph-mon[74418]: pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 05 09:58:36 compute-0 python3.9[161185]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:36 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:36 compute-0 sudo[161183]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:36.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:36 compute-0 sudo[161262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzfhldkxijevjyarrfnzztfledrerfge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928715.7908099-833-260442170192622/AnsiballZ_file.py'
Dec 05 09:58:36 compute-0 sudo[161262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:36 compute-0 python3.9[161264]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:58:36 compute-0 sudo[161262]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:36.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:58:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:37 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:37 compute-0 sudo[161414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdgsihmugbyznfhawksspvoipipsshfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928717.0395772-869-220228174222370/AnsiballZ_stat.py'
Dec 05 09:58:37 compute-0 sudo[161414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:37 compute-0 python3.9[161416]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:37 compute-0 sudo[161414]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:37 compute-0 sudo[161492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhrzbrsisayoeihycjpfmhboupjdahbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928717.0395772-869-220228174222370/AnsiballZ_file.py'
Dec 05 09:58:37 compute-0 sudo[161492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:37 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:37.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:37 compute-0 python3.9[161494]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:58:37 compute-0 sudo[161492]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:58:38 compute-0 ceph-mon[74418]: pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:38 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:38.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:38 compute-0 sudo[161648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybwkubcufunvdkzzupcrtbjgjuvvhwei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928718.4830422-905-38317789150553/AnsiballZ_systemd.py'
Dec 05 09:58:38 compute-0 sudo[161648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:58:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:38.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:58:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:38.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:58:39 compute-0 python3.9[161650]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:58:39 compute-0 systemd[1]: Reloading.
Dec 05 09:58:39 compute-0 systemd-rc-local-generator[161672]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:58:39 compute-0 systemd-sysv-generator[161676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:58:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:39 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:39 compute-0 systemd[1]: Starting Create netns directory...
Dec 05 09:58:39 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 09:58:39 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 09:58:39 compute-0 systemd[1]: Finished Create netns directory.
Dec 05 09:58:39 compute-0 sudo[161648]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:39 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:39.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:40 compute-0 ceph-mon[74418]: pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:58:40 compute-0 sudo[161844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxonfethertzhlpqaehlrwskzscxtbop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928720.0553055-935-146603744070122/AnsiballZ_file.py'
Dec 05 09:58:40 compute-0 sudo[161844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:40 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:40 compute-0 python3.9[161846]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:40 compute-0 sudo[161844]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:40.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:41 compute-0 sudo[161996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkifbddiorqmjgrdtjwvpkznynybqcqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928720.8611848-959-143716802129226/AnsiballZ_stat.py'
Dec 05 09:58:41 compute-0 sudo[161996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:41 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:41 compute-0 python3.9[161998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:41 compute-0 sudo[161996]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:41 compute-0 sudo[162119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbklezcirsdgriploquhamnegpmxpzie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928720.8611848-959-143716802129226/AnsiballZ_copy.py'
Dec 05 09:58:41 compute-0 sudo[162119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:41 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:41.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:41 compute-0 python3.9[162121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928720.8611848-959-143716802129226/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:41 compute-0 sudo[162119]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:42 compute-0 ceph-mon[74418]: pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:42 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:58:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:58:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:42.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:42 compute-0 sudo[162273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvmngtsjqwnydtooifagtrmvvracignv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928722.6236253-1010-179848471774817/AnsiballZ_file.py'
Dec 05 09:58:42 compute-0 sudo[162273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:43 compute-0 python3.9[162275]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:58:43 compute-0 sudo[162273]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:58:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:43 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:58:43 compute-0 sudo[162425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jooggezlftdyejilihluxzpvwopzyfuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928723.4590054-1034-159749676494784/AnsiballZ_stat.py'
Dec 05 09:58:43 compute-0 sudo[162425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:43 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:43.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:43 compute-0 python3.9[162427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:58:43 compute-0 sudo[162425]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:44 compute-0 sudo[162549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsitwkggkvnoevafhbgpqfaidlwpzdgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928723.4590054-1034-159749676494784/AnsiballZ_copy.py'
Dec 05 09:58:44 compute-0 sudo[162549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:44 compute-0 ceph-mon[74418]: pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:44 compute-0 python3.9[162551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764928723.4590054-1034-159749676494784/.source.json _original_basename=.ml60ce6w follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:58:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:44 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:44 compute-0 sudo[162549]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:44.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:45 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:45 compute-0 sudo[162702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytpzcdrxilnuumvhslwdiybwkmzzranz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928725.0039515-1079-15008393426261/AnsiballZ_file.py'
Dec 05 09:58:45 compute-0 sudo[162702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:45 compute-0 python3.9[162704]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:58:45 compute-0 sudo[162702]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:58:45] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 09:58:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:58:45] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 09:58:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:45 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e40037f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:45.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:46 compute-0 sudo[162855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkoajufjrseuzvgkfqodubxrkteoasbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928725.8435984-1103-67655708093568/AnsiballZ_stat.py'
Dec 05 09:58:46 compute-0 sudo[162855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:46 compute-0 sudo[162855]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:46 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:46 compute-0 ceph-mon[74418]: pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:46.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:46 compute-0 sudo[162979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svmabookgsxfgxfhqpsuvnrkmwrnskcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928725.8435984-1103-67655708093568/AnsiballZ_copy.py'
Dec 05 09:58:46 compute-0 sudo[162979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:46 compute-0 sudo[162979]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:46.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:58:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:47 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:47 compute-0 sudo[163041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:58:47 compute-0 sudo[163041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:58:47 compute-0 sudo[163041]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:47 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:47.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:47 compute-0 sudo[163156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mobbceqptptxtjrxfrklgfagurihpygl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928727.3748646-1154-162123092203548/AnsiballZ_container_config_data.py'
Dec 05 09:58:47 compute-0 sudo[163156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:48 compute-0 python3.9[163158]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 05 09:58:48 compute-0 sudo[163156]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:58:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:48 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:48 compute-0 ceph-mon[74418]: pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:48.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:48 compute-0 sudo[163310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmvdfvvpwxcyjabkclmbrwflzqufsejp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928728.3494658-1181-194618152149484/AnsiballZ_container_config_hash.py'
Dec 05 09:58:48 compute-0 sudo[163310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:58:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:48.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:58:49 compute-0 python3.9[163312]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:58:49 compute-0 sudo[163310]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:49 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:49 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:49 compute-0 sudo[163462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pssjehzkcudpgazwiprjhbdrqzszocbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928729.3674202-1208-47542522020629/AnsiballZ_podman_container_info.py'
Dec 05 09:58:49 compute-0 sudo[163462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:49.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:49 compute-0 python3.9[163464]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 09:58:50 compute-0 sudo[163462]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:50 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:50.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:51 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:51 compute-0 ceph-mon[74418]: pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:58:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:51 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:51.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:52 compute-0 sudo[163643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtcemlfwxgrgrwtcmthjbwiuwradbagq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764928731.4102185-1247-94827261606356/AnsiballZ_edpm_container_manage.py'
Dec 05 09:58:52 compute-0 sudo[163643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:58:52 compute-0 python3[163645]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:58:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:52 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:52.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:52 compute-0 ceph-mon[74418]: pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:58:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:53 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:53 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:53.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:54 compute-0 ceph-mon[74418]: pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:54 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:54 compute-0 podman[163693]: 2025-12-05 09:58:54.448228366 +0000 UTC m=+0.101891067 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:58:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:54.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:55 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:58:55] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 09:58:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:58:55] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 09:58:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:55 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:58:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:55.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:58:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:56 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:56.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:58:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:56 compute-0 ceph-mon[74418]: pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:56.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:58:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:56.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:58:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:56.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:58:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:57 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:58:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:58:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:58:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:58:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:58:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:58:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:58:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:58:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:57 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:57.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:58:58 compute-0 ceph-mon[74418]: pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:58:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:58:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:58 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:58:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:58:58.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:58:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:58:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:58:58.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:58:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:59 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:58:59 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:58:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:58:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:58:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:58:59.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:00 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:00.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:59:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:01 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:01 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:01.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:02 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:02.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:59:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:03 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:03 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:03.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:04 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:04.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:59:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:59:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:05 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:59:05] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 09:59:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:59:05] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 09:59:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:05 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:05.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:06 compute-0 podman[163662]: 2025-12-05 09:59:06.050485553 +0000 UTC m=+13.632998166 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 05 09:59:06 compute-0 podman[163819]: 2025-12-05 09:59:06.191609103 +0000 UTC m=+0.055649351 container create 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 09:59:06 compute-0 podman[163819]: 2025-12-05 09:59:06.15868997 +0000 UTC m=+0.022730238 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 05 09:59:06 compute-0 python3[163645]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 05 09:59:06 compute-0 sudo[163643]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:06 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:06.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:59:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:06.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:59:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:07 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:07 compute-0 sudo[163883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:59:07 compute-0 sudo[163883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:59:07 compute-0 sudo[163883]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:07 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:59:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:07.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:59:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:08 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1700004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:08.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:59:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:08.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:59:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:09 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16dc002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:59:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:09 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:09.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:10 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:59:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:11 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:11 compute-0 sudo[163914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:11 compute-0 sudo[163914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:59:11 compute-0 sudo[163914]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:11 compute-0 sudo[163939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 09:59:11 compute-0 sudo[163939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:59:11 compute-0 ceph-mon[74418]: pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:59:11 compute-0 ceph-mon[74418]: pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:59:11 compute-0 ceph-mon[74418]: pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:59:11 compute-0 ceph-mon[74418]: pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:59:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:11 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:11.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:12 compute-0 sudo[163939]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:12 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:59:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:59:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:12.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:12 compute-0 ceph-mon[74418]: pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:59:12 compute-0 ceph-mon[74418]: pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 09:59:12 compute-0 ceph-mon[74418]: pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:59:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:59:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:59:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:59:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:59:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:59:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 306 B/s rd, 0 op/s
Dec 05 09:59:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:59:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:59:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 09:59:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:59:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:59:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:59:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 09:59:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:59:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:59:13 compute-0 sudo[163999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:13 compute-0 sudo[163999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:59:13 compute-0 sudo[163999]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:13 compute-0 sudo[164024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 09:59:13 compute-0 sudo[164024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:59:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:13 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c002590 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:13 compute-0 sudo[164221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egslybxtitlpqhmaewnemjppwndpzvxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928753.2227628-1271-266960228478404/AnsiballZ_stat.py'
Dec 05 09:59:13 compute-0 sudo[164221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:13 compute-0 podman[164195]: 2025-12-05 09:59:13.523273489 +0000 UTC m=+0.057238394 container create 0e16627d29fcb5c1ad708770148b744ddbee5192c6ab292eee8bd14702fe71be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 09:59:13 compute-0 systemd[1]: Started libpod-conmon-0e16627d29fcb5c1ad708770148b744ddbee5192c6ab292eee8bd14702fe71be.scope.
Dec 05 09:59:13 compute-0 podman[164195]: 2025-12-05 09:59:13.498463265 +0000 UTC m=+0.032428190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:59:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:59:13 compute-0 python3.9[164229]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:59:13 compute-0 podman[164195]: 2025-12-05 09:59:13.697409245 +0000 UTC m=+0.231374260 container init 0e16627d29fcb5c1ad708770148b744ddbee5192c6ab292eee8bd14702fe71be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 09:59:13 compute-0 podman[164195]: 2025-12-05 09:59:13.709100313 +0000 UTC m=+0.243065238 container start 0e16627d29fcb5c1ad708770148b744ddbee5192c6ab292eee8bd14702fe71be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:59:13 compute-0 mystifying_snyder[164233]: 167 167
Dec 05 09:59:13 compute-0 systemd[1]: libpod-0e16627d29fcb5c1ad708770148b744ddbee5192c6ab292eee8bd14702fe71be.scope: Deactivated successfully.
Dec 05 09:59:13 compute-0 sudo[164221]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:13 compute-0 podman[164195]: 2025-12-05 09:59:13.756065237 +0000 UTC m=+0.290030172 container attach 0e16627d29fcb5c1ad708770148b744ddbee5192c6ab292eee8bd14702fe71be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 09:59:13 compute-0 podman[164195]: 2025-12-05 09:59:13.757390953 +0000 UTC m=+0.291355888 container died 0e16627d29fcb5c1ad708770148b744ddbee5192c6ab292eee8bd14702fe71be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:59:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:13 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8001f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:13.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-23de80be472d2bc43672fef52e038e61b1e761aa5d09bb4051c9c54023ff06ef-merged.mount: Deactivated successfully.
Dec 05 09:59:14 compute-0 ceph-mon[74418]: pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 09:59:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:59:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 09:59:14 compute-0 ceph-mon[74418]: pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 306 B/s rd, 0 op/s
Dec 05 09:59:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:59:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:59:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 09:59:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 09:59:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 09:59:14 compute-0 podman[164195]: 2025-12-05 09:59:14.087052961 +0000 UTC m=+0.621017896 container remove 0e16627d29fcb5c1ad708770148b744ddbee5192c6ab292eee8bd14702fe71be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:59:14 compute-0 systemd[1]: libpod-conmon-0e16627d29fcb5c1ad708770148b744ddbee5192c6ab292eee8bd14702fe71be.scope: Deactivated successfully.
Dec 05 09:59:14 compute-0 podman[164356]: 2025-12-05 09:59:14.260639462 +0000 UTC m=+0.023860678 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:59:14 compute-0 sudo[164426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ithhojnrikmypltlhgjytophsgtsujim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928754.1110373-1298-37763101212029/AnsiballZ_file.py'
Dec 05 09:59:14 compute-0 sudo[164426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:14 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:14 compute-0 podman[164356]: 2025-12-05 09:59:14.440402112 +0000 UTC m=+0.203623318 container create 5c750af417980c1843ab6fee5e67f830ea9d80d83928bf38321f95e6800b5bd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_einstein, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 09:59:14 compute-0 systemd[1]: Started libpod-conmon-5c750af417980c1843ab6fee5e67f830ea9d80d83928bf38321f95e6800b5bd3.scope.
Dec 05 09:59:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0fa8a43894ae554bb325535af89e7da1f185aad6c18a1ebfe538c00ddc1cc3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0fa8a43894ae554bb325535af89e7da1f185aad6c18a1ebfe538c00ddc1cc3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0fa8a43894ae554bb325535af89e7da1f185aad6c18a1ebfe538c00ddc1cc3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0fa8a43894ae554bb325535af89e7da1f185aad6c18a1ebfe538c00ddc1cc3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0fa8a43894ae554bb325535af89e7da1f185aad6c18a1ebfe538c00ddc1cc3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:14 compute-0 podman[164356]: 2025-12-05 09:59:14.581481151 +0000 UTC m=+0.344702377 container init 5c750af417980c1843ab6fee5e67f830ea9d80d83928bf38321f95e6800b5bd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_einstein, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 09:59:14 compute-0 podman[164356]: 2025-12-05 09:59:14.592266274 +0000 UTC m=+0.355487480 container start 5c750af417980c1843ab6fee5e67f830ea9d80d83928bf38321f95e6800b5bd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 09:59:14 compute-0 python3.9[164428]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:14 compute-0 podman[164356]: 2025-12-05 09:59:14.602517732 +0000 UTC m=+0.365738928 container attach 5c750af417980c1843ab6fee5e67f830ea9d80d83928bf38321f95e6800b5bd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_einstein, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 09:59:14 compute-0 sudo[164426]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:59:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:14.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:14 compute-0 sudo[164514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aruvfatfxxhyredxsdyizmlxsvulowac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928754.1110373-1298-37763101212029/AnsiballZ_stat.py'
Dec 05 09:59:14 compute-0 sudo[164514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 306 B/s rd, 0 op/s
Dec 05 09:59:14 compute-0 nostalgic_einstein[164431]: --> passed data devices: 0 physical, 1 LVM
Dec 05 09:59:14 compute-0 nostalgic_einstein[164431]: --> All data devices are unavailable
Dec 05 09:59:14 compute-0 systemd[1]: libpod-5c750af417980c1843ab6fee5e67f830ea9d80d83928bf38321f95e6800b5bd3.scope: Deactivated successfully.
Dec 05 09:59:14 compute-0 podman[164356]: 2025-12-05 09:59:14.922209299 +0000 UTC m=+0.685430495 container died 5c750af417980c1843ab6fee5e67f830ea9d80d83928bf38321f95e6800b5bd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_einstein, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0fa8a43894ae554bb325535af89e7da1f185aad6c18a1ebfe538c00ddc1cc3f-merged.mount: Deactivated successfully.
Dec 05 09:59:15 compute-0 python3.9[164517]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:59:15 compute-0 sudo[164514]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 compute-0 podman[164356]: 2025-12-05 09:59:15.087199447 +0000 UTC m=+0.850420643 container remove 5c750af417980c1843ab6fee5e67f830ea9d80d83928bf38321f95e6800b5bd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 09:59:15 compute-0 systemd[1]: libpod-conmon-5c750af417980c1843ab6fee5e67f830ea9d80d83928bf38321f95e6800b5bd3.scope: Deactivated successfully.
Dec 05 09:59:15 compute-0 sudo[164024]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 compute-0 sudo[164561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:15 compute-0 sudo[164561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:59:15 compute-0 sudo[164561]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 compute-0 sudo[164611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 09:59:15 compute-0 sudo[164611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:59:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:15 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:15 compute-0 sudo[164757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fttywptzemglpajbvirjzplikhxjiuwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928755.1329653-1298-21460761323173/AnsiballZ_copy.py'
Dec 05 09:59:15 compute-0 sudo[164757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:59:15] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Dec 05 09:59:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:59:15] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Dec 05 09:59:15 compute-0 podman[164775]: 2025-12-05 09:59:15.68452699 +0000 UTC m=+0.046395131 container create 50d7e4cc4396c34a9731fb43e820e1efea147ddf5cae187f8c85d054e2fa7d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 09:59:15 compute-0 systemd[1]: Started libpod-conmon-50d7e4cc4396c34a9731fb43e820e1efea147ddf5cae187f8c85d054e2fa7d2f.scope.
Dec 05 09:59:15 compute-0 python3.9[164761]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764928755.1329653-1298-21460761323173/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:59:15 compute-0 podman[164775]: 2025-12-05 09:59:15.665899174 +0000 UTC m=+0.027767065 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:59:15 compute-0 sudo[164757]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 compute-0 podman[164775]: 2025-12-05 09:59:15.777302988 +0000 UTC m=+0.139170889 container init 50d7e4cc4396c34a9731fb43e820e1efea147ddf5cae187f8c85d054e2fa7d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 09:59:15 compute-0 podman[164775]: 2025-12-05 09:59:15.785938512 +0000 UTC m=+0.147806403 container start 50d7e4cc4396c34a9731fb43e820e1efea147ddf5cae187f8c85d054e2fa7d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_rhodes, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:59:15 compute-0 podman[164775]: 2025-12-05 09:59:15.789380105 +0000 UTC m=+0.151248026 container attach 50d7e4cc4396c34a9731fb43e820e1efea147ddf5cae187f8c85d054e2fa7d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_rhodes, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 09:59:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:15 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:15 compute-0 angry_rhodes[164791]: 167 167
Dec 05 09:59:15 compute-0 systemd[1]: libpod-50d7e4cc4396c34a9731fb43e820e1efea147ddf5cae187f8c85d054e2fa7d2f.scope: Deactivated successfully.
Dec 05 09:59:15 compute-0 podman[164775]: 2025-12-05 09:59:15.791690318 +0000 UTC m=+0.153558229 container died 50d7e4cc4396c34a9731fb43e820e1efea147ddf5cae187f8c85d054e2fa7d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_rhodes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:59:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2ddeadf70815f30e80d4de278013d7888080a287f1114a63ecf0571f58a175d-merged.mount: Deactivated successfully.
Dec 05 09:59:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:15.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:15 compute-0 podman[164775]: 2025-12-05 09:59:15.829095623 +0000 UTC m=+0.190963504 container remove 50d7e4cc4396c34a9731fb43e820e1efea147ddf5cae187f8c85d054e2fa7d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_rhodes, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 09:59:15 compute-0 systemd[1]: libpod-conmon-50d7e4cc4396c34a9731fb43e820e1efea147ddf5cae187f8c85d054e2fa7d2f.scope: Deactivated successfully.
Dec 05 09:59:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095915 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:59:16 compute-0 sudo[164898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afitelpuxlbeoxfblmxbouzcnignnoli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928755.1329653-1298-21460761323173/AnsiballZ_systemd.py'
Dec 05 09:59:16 compute-0 podman[164859]: 2025-12-05 09:59:16.029225945 +0000 UTC m=+0.068071438 container create 4cde33253824d2ba806fea8cdf56db87a9f1f192df040aa7e1f99b8d34da7699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_albattani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 09:59:16 compute-0 sudo[164898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:16 compute-0 systemd[1]: Started libpod-conmon-4cde33253824d2ba806fea8cdf56db87a9f1f192df040aa7e1f99b8d34da7699.scope.
Dec 05 09:59:16 compute-0 ceph-mon[74418]: pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 306 B/s rd, 0 op/s
Dec 05 09:59:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:59:16 compute-0 podman[164859]: 2025-12-05 09:59:16.006000265 +0000 UTC m=+0.044845768 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4dc2cb07c1aa1f332203a68d1c9032a23664a9c165774e1653823f9800c6e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4dc2cb07c1aa1f332203a68d1c9032a23664a9c165774e1653823f9800c6e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4dc2cb07c1aa1f332203a68d1c9032a23664a9c165774e1653823f9800c6e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4dc2cb07c1aa1f332203a68d1c9032a23664a9c165774e1653823f9800c6e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:16 compute-0 podman[164859]: 2025-12-05 09:59:16.122791845 +0000 UTC m=+0.161637358 container init 4cde33253824d2ba806fea8cdf56db87a9f1f192df040aa7e1f99b8d34da7699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_albattani, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 09:59:16 compute-0 podman[164859]: 2025-12-05 09:59:16.131350997 +0000 UTC m=+0.170196490 container start 4cde33253824d2ba806fea8cdf56db87a9f1f192df040aa7e1f99b8d34da7699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_albattani, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:59:16 compute-0 podman[164859]: 2025-12-05 09:59:16.135976903 +0000 UTC m=+0.174822376 container attach 4cde33253824d2ba806fea8cdf56db87a9f1f192df040aa7e1f99b8d34da7699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_albattani, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 05 09:59:16 compute-0 python3.9[164903]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:59:16 compute-0 systemd[1]: Reloading.
Dec 05 09:59:16 compute-0 goofy_albattani[164907]: {
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:     "1": [
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:         {
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             "devices": [
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "/dev/loop3"
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             ],
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             "lv_name": "ceph_lv0",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             "lv_size": "21470642176",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             "name": "ceph_lv0",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:59:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:16 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8001f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             "tags": {
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.cluster_name": "ceph",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.crush_device_class": "",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.encrypted": "0",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.osd_id": "1",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.type": "block",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.vdo": "0",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:                 "ceph.with_tpm": "0"
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             },
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             "type": "block",
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:             "vg_name": "ceph_vg0"
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:         }
Dec 05 09:59:16 compute-0 goofy_albattani[164907]:     ]
Dec 05 09:59:16 compute-0 goofy_albattani[164907]: }
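The `goofy_albattani` container above is the `cephadm ... ceph-volume -- lvm list --format json` call issued in the earlier sudo line; its JSON output maps each OSD id to the logical volume and physical device backing it. A minimal sketch (a hypothetical helper, not part of cephadm or the deployment tooling) of parsing that structure:

```python
# Hypothetical helper (not part of cephadm): parse the `ceph-volume lvm list
# --format json` output printed by the goofy_albattani container above and
# summarize which device backs each OSD.
import json

def summarize_lvm_list(raw: str):
    """Return (osd_id, lv_path, devices) tuples from ceph-volume lvm list JSON."""
    data = json.loads(raw)
    rows = []
    for osd_id, lvs in data.items():          # top-level keys are OSD ids, e.g. "1"
        for lv in lvs:                        # each entry describes one LV
            devices = ",".join(lv.get("devices", []))
            rows.append((osd_id, lv.get("lv_path"), devices))
    return rows

# With the output logged above this yields:
#   [("1", "/dev/ceph_vg0/ceph_lv0", "/dev/loop3")]
```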
Dec 05 09:59:16 compute-0 podman[164859]: 2025-12-05 09:59:16.454583651 +0000 UTC m=+0.493429144 container died 4cde33253824d2ba806fea8cdf56db87a9f1f192df040aa7e1f99b8d34da7699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 09:59:16 compute-0 systemd-rc-local-generator[164940]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:59:16 compute-0 systemd-sysv-generator[164943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:59:16 compute-0 systemd[1]: libpod-4cde33253824d2ba806fea8cdf56db87a9f1f192df040aa7e1f99b8d34da7699.scope: Deactivated successfully.
Dec 05 09:59:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:16.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c4dc2cb07c1aa1f332203a68d1c9032a23664a9c165774e1653823f9800c6e5-merged.mount: Deactivated successfully.
Dec 05 09:59:16 compute-0 sudo[164898]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:16 compute-0 podman[164859]: 2025-12-05 09:59:16.717866657 +0000 UTC m=+0.756712130 container remove 4cde33253824d2ba806fea8cdf56db87a9f1f192df040aa7e1f99b8d34da7699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_albattani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:59:16 compute-0 systemd[1]: libpod-conmon-4cde33253824d2ba806fea8cdf56db87a9f1f192df040aa7e1f99b8d34da7699.scope: Deactivated successfully.
Dec 05 09:59:16 compute-0 sudo[164611]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 306 B/s rd, 0 op/s
Dec 05 09:59:16 compute-0 sudo[164984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:16 compute-0 sudo[164984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:59:16 compute-0 sudo[164984]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:16 compute-0 sudo[165027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 09:59:16 compute-0 sudo[165027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:59:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:16.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:59:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:16.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:59:16 compute-0 sudo[165090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkbnuxmjkpzhzdnjmgmjdttzgxcqxznw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928755.1329653-1298-21460761323173/AnsiballZ_systemd.py'
Dec 05 09:59:16 compute-0 sudo[165090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:17 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:17 compute-0 python3.9[165092]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:59:17 compute-0 systemd[1]: Reloading.
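The ansible-systemd invocation above (name=edpm_ovn_metadata_agent.service, state=restarted, enabled=True) is what triggers the systemd reload and the ovn_metadata_agent start that follows. An illustrative sketch of the equivalent systemctl sequence (the deployment itself uses the Ansible module, not this script):

```python
# Illustrative only: the systemctl operations that the ansible-systemd call
# above (enabled=True, state=restarted) boils down to.
import subprocess

def enable_and_restart(unit: str) -> None:
    subprocess.run(["systemctl", "enable", unit], check=True)   # enabled=True
    subprocess.run(["systemctl", "restart", unit], check=True)  # state=restarted

enable_and_restart("edpm_ovn_metadata_agent.service")
```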
Dec 05 09:59:17 compute-0 podman[165135]: 2025-12-05 09:59:17.409511989 +0000 UTC m=+0.049168016 container create a2f372a73963965e8795b301570fcbe6388bc38c3a902544d007b6458ea8d797 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_yalow, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:59:17 compute-0 systemd-rc-local-generator[165170]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:59:17 compute-0 systemd-sysv-generator[165173]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:59:17 compute-0 podman[165135]: 2025-12-05 09:59:17.390812551 +0000 UTC m=+0.030468588 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:59:17 compute-0 systemd[1]: Started libpod-conmon-a2f372a73963965e8795b301570fcbe6388bc38c3a902544d007b6458ea8d797.scope.
Dec 05 09:59:17 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec 05 09:59:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:59:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:17 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:17 compute-0 podman[165135]: 2025-12-05 09:59:17.799279508 +0000 UTC m=+0.438935575 container init a2f372a73963965e8795b301570fcbe6388bc38c3a902544d007b6458ea8d797 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 09:59:17 compute-0 podman[165135]: 2025-12-05 09:59:17.80932533 +0000 UTC m=+0.448981347 container start a2f372a73963965e8795b301570fcbe6388bc38c3a902544d007b6458ea8d797 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_yalow, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 09:59:17 compute-0 podman[165135]: 2025-12-05 09:59:17.812429155 +0000 UTC m=+0.452085272 container attach a2f372a73963965e8795b301570fcbe6388bc38c3a902544d007b6458ea8d797 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:59:17 compute-0 ecstatic_yalow[165186]: 167 167
Dec 05 09:59:17 compute-0 systemd[1]: libpod-a2f372a73963965e8795b301570fcbe6388bc38c3a902544d007b6458ea8d797.scope: Deactivated successfully.
Dec 05 09:59:17 compute-0 podman[165135]: 2025-12-05 09:59:17.815704774 +0000 UTC m=+0.455360791 container died a2f372a73963965e8795b301570fcbe6388bc38c3a902544d007b6458ea8d797 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_yalow, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 09:59:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:17.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-26f5ffa58230cde0e2e9b7fd56e348b0615e3f61622ce14c386a2c52272aaafa-merged.mount: Deactivated successfully.
Dec 05 09:59:17 compute-0 podman[165135]: 2025-12-05 09:59:17.853457628 +0000 UTC m=+0.493113665 container remove a2f372a73963965e8795b301570fcbe6388bc38c3a902544d007b6458ea8d797 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 09:59:17 compute-0 systemd[1]: libpod-conmon-a2f372a73963965e8795b301570fcbe6388bc38c3a902544d007b6458ea8d797.scope: Deactivated successfully.
Dec 05 09:59:18 compute-0 podman[165222]: 2025-12-05 09:59:18.077970372 +0000 UTC m=+0.104954789 container create e442053279d0946dcf44962ef9b7ab63d517132a358149098ca6439ce4bceebe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 05 09:59:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18157d9297c562874260fffe1320b6336fffca3effa116402ffd345b8346c24/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18157d9297c562874260fffe1320b6336fffca3effa116402ffd345b8346c24/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:18 compute-0 systemd[1]: Started libpod-conmon-e442053279d0946dcf44962ef9b7ab63d517132a358149098ca6439ce4bceebe.scope.
Dec 05 09:59:18 compute-0 ceph-mon[74418]: pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 306 B/s rd, 0 op/s
Dec 05 09:59:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 09:59:18 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a.
Dec 05 09:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6acd4bf92e36b97e45a031afcc7c3f963b21882b4bfb869da508b11c07e84561/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6acd4bf92e36b97e45a031afcc7c3f963b21882b4bfb869da508b11c07e84561/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6acd4bf92e36b97e45a031afcc7c3f963b21882b4bfb869da508b11c07e84561/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6acd4bf92e36b97e45a031afcc7c3f963b21882b4bfb869da508b11c07e84561/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:18 compute-0 podman[165222]: 2025-12-05 09:59:18.051379251 +0000 UTC m=+0.078363688 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:59:18 compute-0 podman[165190]: 2025-12-05 09:59:18.152848375 +0000 UTC m=+0.456753858 container init 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 09:59:18 compute-0 podman[165222]: 2025-12-05 09:59:18.160909564 +0000 UTC m=+0.187894001 container init e442053279d0946dcf44962ef9b7ab63d517132a358149098ca6439ce4bceebe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_grothendieck, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:59:18 compute-0 podman[165222]: 2025-12-05 09:59:18.170860983 +0000 UTC m=+0.197845400 container start e442053279d0946dcf44962ef9b7ab63d517132a358149098ca6439ce4bceebe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: + sudo -E kolla_set_configs
Dec 05 09:59:18 compute-0 podman[165222]: 2025-12-05 09:59:18.17554069 +0000 UTC m=+0.202525097 container attach e442053279d0946dcf44962ef9b7ab63d517132a358149098ca6439ce4bceebe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 09:59:18 compute-0 podman[165190]: 2025-12-05 09:59:18.207091057 +0000 UTC m=+0.510996510 container start 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 09:59:18 compute-0 edpm-start-podman-container[165190]: ovn_metadata_agent
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Validating config file
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Copying service configuration files
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Writing out command to execute
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 05 09:59:18 compute-0 edpm-start-podman-container[165188]: Creating additional drop-in dependency for "ovn_metadata_agent" (67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a)
Dec 05 09:59:18 compute-0 podman[165253]: 2025-12-05 09:59:18.273738136 +0000 UTC m=+0.068238283 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: ++ cat /run_command
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: + CMD=neutron-ovn-metadata-agent
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: + ARGS=
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: + sudo kolla_copy_cacerts
Dec 05 09:59:18 compute-0 systemd[1]: Reloading.
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: + [[ ! -n '' ]]
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: + . kolla_extend_start
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: Running command: 'neutron-ovn-metadata-agent'
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: + umask 0022
Dec 05 09:59:18 compute-0 ovn_metadata_agent[165238]: + exec neutron-ovn-metadata-agent
Dec 05 09:59:18 compute-0 systemd-rc-local-generator[165321]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:59:18 compute-0 systemd-sysv-generator[165325]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:59:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:18 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:18 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec 05 09:59:18 compute-0 sudo[165090]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:18.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Dec 05 09:59:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:18.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:59:18 compute-0 lvm[165427]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:59:18 compute-0 lvm[165427]: VG ceph_vg0 finished
Dec 05 09:59:18 compute-0 zen_grothendieck[165244]: {}
Dec 05 09:59:19 compute-0 systemd[1]: libpod-e442053279d0946dcf44962ef9b7ab63d517132a358149098ca6439ce4bceebe.scope: Deactivated successfully.
Dec 05 09:59:19 compute-0 systemd[1]: libpod-e442053279d0946dcf44962ef9b7ab63d517132a358149098ca6439ce4bceebe.scope: Consumed 1.169s CPU time.
Dec 05 09:59:19 compute-0 podman[165432]: 2025-12-05 09:59:19.059537544 +0000 UTC m=+0.030774926 container died e442053279d0946dcf44962ef9b7ab63d517132a358149098ca6439ce4bceebe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_grothendieck, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 09:59:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6acd4bf92e36b97e45a031afcc7c3f963b21882b4bfb869da508b11c07e84561-merged.mount: Deactivated successfully.
Dec 05 09:59:19 compute-0 sshd-session[155436]: Connection closed by 192.168.122.30 port 49078
Dec 05 09:59:19 compute-0 sshd-session[155433]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:59:19 compute-0 systemd-logind[789]: Session 52 logged out. Waiting for processes to exit.
Dec 05 09:59:19 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Dec 05 09:59:19 compute-0 systemd[1]: session-52.scope: Consumed 56.732s CPU time.
Dec 05 09:59:19 compute-0 podman[165432]: 2025-12-05 09:59:19.108911704 +0000 UTC m=+0.080149076 container remove e442053279d0946dcf44962ef9b7ab63d517132a358149098ca6439ce4bceebe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_grothendieck, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 05 09:59:19 compute-0 systemd-logind[789]: Removed session 52.
Dec 05 09:59:19 compute-0 systemd[1]: libpod-conmon-e442053279d0946dcf44962ef9b7ab63d517132a358149098ca6439ce4bceebe.scope: Deactivated successfully.
Dec 05 09:59:19 compute-0 sudo[165027]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 09:59:19 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:59:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 09:59:19 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:59:19 compute-0 sudo[165447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:59:19 compute-0 sudo[165447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:59:19 compute-0 sudo[165447]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:19 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8001f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:59:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:19 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:19.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:20 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.515 165250 INFO neutron.common.config [-] Logging enabled!
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.516 165250 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.516 165250 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.516 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.516 165250 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.517 165250 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.517 165250 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.517 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.517 165250 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.517 165250 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.517 165250 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.517 165250 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.517 165250 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.518 165250 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.518 165250 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.518 165250 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.518 165250 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.518 165250 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.518 165250 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.518 165250 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.518 165250 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.518 165250 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.519 165250 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.519 165250 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.519 165250 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.519 165250 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.519 165250 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.519 165250 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.519 165250 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.519 165250 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.519 165250 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.520 165250 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.520 165250 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.520 165250 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.520 165250 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.520 165250 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.520 165250 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.520 165250 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.520 165250 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.521 165250 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.521 165250 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.521 165250 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.521 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.521 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.521 165250 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.521 165250 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.521 165250 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.521 165250 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.521 165250 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.522 165250 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.522 165250 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.522 165250 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.522 165250 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.522 165250 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.522 165250 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.522 165250 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.522 165250 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.522 165250 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.523 165250 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.523 165250 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.523 165250 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.523 165250 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.523 165250 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.523 165250 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.523 165250 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.523 165250 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.523 165250 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.524 165250 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.524 165250 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.524 165250 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.524 165250 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.524 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.524 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.524 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.524 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.524 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.525 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.525 165250 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.525 165250 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.525 165250 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.525 165250 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.525 165250 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.525 165250 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.525 165250 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.526 165250 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.526 165250 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.526 165250 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.526 165250 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.526 165250 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.526 165250 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.526 165250 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.526 165250 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.527 165250 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.527 165250 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.527 165250 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.527 165250 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.527 165250 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.527 165250 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.527 165250 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.527 165250 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.527 165250 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.528 165250 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.528 165250 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.528 165250 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.528 165250 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.528 165250 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.528 165250 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.528 165250 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.528 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.528 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.529 165250 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.529 165250 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.529 165250 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.529 165250 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.529 165250 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.529 165250 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.529 165250 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.529 165250 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.529 165250 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.530 165250 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.530 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.530 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.530 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.530 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.530 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.530 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.530 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.531 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.531 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.531 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.531 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.531 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.531 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.531 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.531 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.531 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.532 165250 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.532 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.532 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.532 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.532 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.532 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.532 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.532 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.532 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.533 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.533 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.533 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.533 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.533 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.533 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.533 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.533 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.533 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.534 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.534 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.534 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.534 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.534 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.534 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.534 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.534 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.534 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.535 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.535 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.535 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.535 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.535 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.535 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.535 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.535 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.536 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.536 165250 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.536 165250 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.536 165250 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.536 165250 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.536 165250 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.536 165250 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.537 165250 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.537 165250 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.537 165250 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.537 165250 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.537 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.537 165250 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.537 165250 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.537 165250 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.537 165250 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.538 165250 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.538 165250 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.538 165250 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.538 165250 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.538 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.538 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.538 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.538 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.538 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.539 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.539 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.539 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.539 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.539 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.539 165250 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.539 165250 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.539 165250 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.539 165250 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.539 165250 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.540 165250 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.540 165250 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.540 165250 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.540 165250 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.540 165250 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.540 165250 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.540 165250 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.540 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.540 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.540 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.541 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.541 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.541 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.541 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.541 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.541 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.541 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.541 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.541 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.542 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.542 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.542 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.542 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.542 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.542 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.542 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.542 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.542 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.542 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.543 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.543 165250 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.543 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.543 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.543 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.543 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.543 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.543 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.543 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.543 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.544 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.544 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.544 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.544 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.544 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.544 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.544 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.544 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.544 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.545 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.545 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.545 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.545 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.545 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.545 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.545 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.545 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.545 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.545 165250 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.546 165250 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.546 165250 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.546 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.546 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.546 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.546 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.546 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.546 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.546 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.546 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.547 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.547 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.547 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.547 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.547 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.547 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.547 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.547 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.547 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.548 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.548 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.548 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.548 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.548 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.548 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.548 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.548 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.548 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.548 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.549 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.549 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.549 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.549 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.549 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.549 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.549 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.549 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.549 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.549 165250 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.550 165250 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
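The wall of privsep_*, AGENT.*, QUOTAS.*, nova.*, placement.*, ironic.*, ovn.*, OVS/ovs.* and oslo_messaging_* lines that ends at the asterisk banner above is oslo.config's standard startup dump: with debug and log_options enabled, the agent calls ConfigOpts.log_opt_values(), which emits one DEBUG line per registered option, group by group, and prints **** for any option registered with secret=True (hence the masked transport_url). A minimal, self-contained sketch of that mechanism, using an illustrative 'demo' group rather than neutron's real options:

    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('transport_url', secret=True),   # secret=True -> logged as ****
        cfg.IntOpt('thread_pool_size', default=8),
    ], group='demo')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF(args=[], project='demo')
    # One DEBUG line per option ("demo.thread_pool_size = 8", ...), framed by
    # the ****/==== banner lines seen in the journal entries above.
    CONF.log_opt_values(LOG, logging.DEBUG)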
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.559 165250 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.559 165250 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.559 165250 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.559 165250 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.559 165250 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 05 09:59:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:20.573 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 41643524-e4b6-4069-ba08-6e5872c74bd3 (UUID: 41643524-e4b6-4069-ba08-6e5872c74bd3) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
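The two vlog lines and the "Loaded chassis name ..." line show the agent's local OVSDB handshake: an ovsdbapp IDL dials the switch database on tcp:127.0.0.1:6640 (the ovs.ovsdb_connection value from the dump above), then _load_config reads the chassis identity and integration bridge out of the Open_vSwitch table's external_ids. A rough equivalent with the public ovsdbapp API, as a sketch rather than the agent's exact code path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Dial the local switch database, as in the "connecting.../connected" lines.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # external_ids on the single Open_vSwitch row carries system-id (the
    # chassis name), ovn-bridge, and related keys.
    ext_ids = api.db_get('Open_vSwitch', '.', 'external_ids').execute(
        check_error=True)
    chassis_name = ext_ids.get('system-id')
    ovn_bridge = ext_ids.get('ovn-bridge', 'br-int')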
Dec 05 09:59:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:20.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
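The interleaved radosgw "beast" lines record anonymous HEAD / HTTP/1.0 requests arriving from the 192.168.122.x controller addresses, a pattern consistent with a load-balancer health probe against the RGW frontend (an inference from the log, which does not name the prober). Such a probe is easy to reproduce from Python; the target host and port here are assumptions, not values stated in the log:

    import http.client

    # Hypothetical RGW beast endpoint; substitute the real frontend host/port.
    conn = http.client.HTTPConnection('compute-0', 8080, timeout=5)
    conn.request('HEAD', '/')
    resp = conn.getresponse()
    print(resp.status)   # 200 expected, matching the http_status=200 entries above
    conn.close()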
Dec 05 09:59:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.085 165250 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.184 165250 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.185 165250 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.185 165250 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.188 165250 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.199 165250 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.204 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '41643524-e4b6-4069-ba08-6e5872c74bd3'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7ffbdf76c910>], external_ids={}, name=41643524-e4b6-4069-ba08-6e5872c74bd3, nb_cfg_timestamp=1764928681426, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
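The "Matched CREATE: ChassisPrivateCreateEvent(...)" line is ovsdbapp's event dispatcher comparing a southbound update against the agent's registered row events; an event names the table, the change types it cares about, and match conditions, and its run() fires when a row satisfies them. A minimal sketch of that pattern (the class body and print are illustrative, not neutron's implementation):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class ChassisPrivateCreateEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            table = 'Chassis_Private'
            events = (self.ROW_CREATE,)
            conditions = (('name', '=', chassis_name),)
            super().__init__(events, table, conditions)

        def run(self, event, row, old):
            # Fires once the local chassis row appears in the SB database.
            print('chassis registered:', row.name, 'nb_cfg =', row.nb_cfg)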
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.205 165250 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7ffbdf75ef70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.206 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.206 165250 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.206 165250 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.206 165250 INFO oslo_service.service [-] Starting 1 workers
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.210 165250 DEBUG oslo_service.service [-] Started child 165509 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
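"Starting 1 workers" followed by "Started child 165509" is oslo.service forking its worker pool: the parent (165250) stays up as a supervisor and respawns children that exit, while the child runs the metadata proxy. In outline, with a placeholder Service subclass (generic oslo.service usage, not the agent's actual classes):

    from oslo_config import cfg
    from oslo_service import service

    class ProxyWorker(service.Service):
        def start(self):
            super().start()
            # Worker-side setup runs here, in the forked child process.

    launcher = service.ProcessLauncher(cfg.CONF)
    launcher.launch_service(ProxyWorker(), workers=1)  # "Starting 1 workers"
    launcher.wait()  # parent supervises; dead children are respawned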
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.213 165250 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpogjr1x1v/privsep.sock']
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.213 165509 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-952287'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.236 165509 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.236 165509 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.236 165509 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.239 165509 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.246 165509 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 05 09:59:21 compute-0 ceph-mon[74418]: pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Dec 05 09:59:21 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 09:59:21 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.252 165509 INFO eventlet.wsgi.server [-] (165509) wsgi starting up on http:/var/lib/neutron/metadata_proxy
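The odd-looking "http:/var/lib/neutron/metadata_proxy" is eventlet's startup banner for a WSGI server bound to a unix socket instead of a TCP port; the single slash is simply how eventlet renders a socket path. A bare-bones equivalent, with a hypothetical path in place of the agent's:

    import socket

    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok\n']

    # Hypothetical socket path; the agent binds /var/lib/neutron/metadata_proxy.
    sock = eventlet.listen('/tmp/metadata_proxy', family=socket.AF_UNIX)
    wsgi.server(sock, app)  # logs "wsgi starting up on http:/tmp/metadata_proxy"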
Dec 05 09:59:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:21 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:21 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8001f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:21.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:21 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 05 09:59:22 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:22.088 165250 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 09:59:22 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:22.089 165250 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpogjr1x1v/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 05 09:59:22 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.833 165514 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 09:59:22 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.843 165514 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 09:59:22 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.846 165514 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 05 09:59:22 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:21.847 165514 INFO oslo.privsep.daemon [-] privsep daemon running as pid 165514
Dec 05 09:59:22 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:22.092 165514 DEBUG oslo.privsep.daemon [-] privsep: reply[02df3e1a-2bdf-40b3-ad5a-bb2f3fca2fcc]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
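Taken together, the privsep lines trace the whole oslo.privsep handshake: the unprivileged agent execs sudo + neutron-rootwrap to launch privsep-helper (09:59:21.213), the helper keeps only the capabilities its context requests (CAP_SYS_ADMIN, i.e. the [21] in the privsep_namespace.capabilities dump earlier), and replies then flow back to the agent over the unix socket under /tmp/tmpogjr1x1v. The kernel's "deprecated v2 capabilities" warning at 09:59:21 is emitted during that capability juggling. A context of this shape is declared roughly as follows (a sketch; neutron's real contexts live in neutron.privileged):

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    namespace_cmd = priv_context.PrivContext(
        'neutron',
        cfg_section='privsep_namespace',      # matches the config group above
        pypath=__name__ + '.namespace_cmd',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @namespace_cmd.entrypoint
    def create_namespace(name):
        # Body executes inside the privsep daemon (uid/gid 0/0), not the agent.
        ...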
Dec 05 09:59:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:22 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:22 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:22.609 165514 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:59:22 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:22.609 165514 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:59:22 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:22.609 165514 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
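The three lockutils lines are oslo.concurrency's standard acquire/acquired/released trace for named locks; callers take them either as a context manager or via the synchronized decorator:

    from oslo_concurrency import lockutils

    # Context-manager form, as in the "context-manager" trace above.
    with lockutils.lock('context-manager'):
        ...  # critical section

    # Decorator form; the "waited/held 0.000s" timings come from its wrapper.
    @lockutils.synchronized('singleton_lock')
    def get_singleton():
        ...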
Dec 05 09:59:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:22.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.235 165514 DEBUG oslo.privsep.daemon [-] privsep: reply[7588bf09-8f7b-49df-8b75-f09b39c28880]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.238 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, column=external_ids, values=({'neutron:ovn-metadata-id': 'dc4fac16-4a88-54f3-90b9-04c139f22579'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 09:59:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:23 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.459 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
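These two "Running txn" lines are the agent registering itself in the southbound database: a DbAddCommand merges neutron:ovn-metadata-id into the chassis row's external_ids map, then a DbSetCommand records neutron:ovn-bridge. Expressed with the generic ovsdbapp command API (sb_api is an assumed handle to an already-connected SB backend, not shown here):

    chassis = '41643524-e4b6-4069-ba08-6e5872c74bd3'

    with sb_api.transaction(check_error=True) as txn:
        txn.add(sb_api.db_add(
            'Chassis_Private', chassis, 'external_ids',
            {'neutron:ovn-metadata-id': 'dc4fac16-4a88-54f3-90b9-04c139f22579'}))
        txn.add(sb_api.db_set(
            'Chassis_Private', chassis,
            ('external_ids', {'neutron:ovn-bridge': 'br-int'})))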
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.735 165250 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.736 165250 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.736 165250 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.736 165250 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.736 165250 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.736 165250 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.737 165250 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.737 165250 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.737 165250 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.737 165250 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.737 165250 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.737 165250 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.737 165250 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.737 165250 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.738 165250 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.738 165250 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.738 165250 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.738 165250 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.738 165250 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.738 165250 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.738 165250 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.739 165250 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.739 165250 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.739 165250 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.739 165250 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.739 165250 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.740 165250 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.740 165250 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.740 165250 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.740 165250 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.740 165250 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.740 165250 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.740 165250 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.740 165250 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.741 165250 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.741 165250 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.741 165250 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.741 165250 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.741 165250 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.741 165250 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.741 165250 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.741 165250 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.742 165250 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.742 165250 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.742 165250 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.742 165250 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.742 165250 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.742 165250 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.742 165250 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.742 165250 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.743 165250 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.743 165250 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.743 165250 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.743 165250 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.743 165250 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.743 165250 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.743 165250 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.743 165250 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.744 165250 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.744 165250 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.744 165250 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.744 165250 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.744 165250 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.744 165250 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.744 165250 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.745 165250 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.745 165250 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.745 165250 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.745 165250 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.745 165250 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.745 165250 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.745 165250 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.746 165250 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.746 165250 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.746 165250 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.746 165250 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.746 165250 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.746 165250 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.746 165250 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.746 165250 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.747 165250 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.747 165250 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.747 165250 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.747 165250 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.747 165250 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.747 165250 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.747 165250 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.747 165250 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.748 165250 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.748 165250 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.748 165250 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.748 165250 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.748 165250 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.748 165250 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.748 165250 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.748 165250 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.749 165250 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.749 165250 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.749 165250 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.749 165250 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.749 165250 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.749 165250 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.749 165250 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.749 165250 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.749 165250 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.749 165250 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.750 165250 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.750 165250 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.750 165250 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.750 165250 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.750 165250 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.750 165250 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.751 165250 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.751 165250 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.751 165250 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.751 165250 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.751 165250 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.751 165250 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.751 165250 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.751 165250 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.752 165250 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.752 165250 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.752 165250 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.752 165250 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.752 165250 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.752 165250 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.752 165250 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.753 165250 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.753 165250 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.753 165250 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.753 165250 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.753 165250 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.753 165250 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.753 165250 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.753 165250 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.753 165250 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.754 165250 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.754 165250 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.754 165250 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.754 165250 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.754 165250 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.754 165250 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.755 165250 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.755 165250 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.755 165250 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.755 165250 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.755 165250 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.755 165250 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.755 165250 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.756 165250 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.756 165250 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.756 165250 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.756 165250 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.756 165250 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.756 165250 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.756 165250 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.756 165250 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.756 165250 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.757 165250 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.757 165250 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.757 165250 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.757 165250 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.757 165250 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.757 165250 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.757 165250 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.757 165250 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.757 165250 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.758 165250 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.758 165250 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.758 165250 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.758 165250 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.758 165250 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.758 165250 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.758 165250 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.759 165250 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.759 165250 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.759 165250 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.759 165250 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.759 165250 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.759 165250 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.759 165250 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.759 165250 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.760 165250 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.760 165250 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.760 165250 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.760 165250 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.760 165250 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.760 165250 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.760 165250 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.761 165250 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.761 165250 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.761 165250 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.761 165250 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.761 165250 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.761 165250 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.761 165250 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.761 165250 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.762 165250 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.762 165250 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.762 165250 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.762 165250 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.762 165250 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.762 165250 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.762 165250 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.763 165250 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.763 165250 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.763 165250 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.763 165250 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.763 165250 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.763 165250 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.763 165250 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.764 165250 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.764 165250 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.764 165250 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.764 165250 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.764 165250 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.764 165250 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.764 165250 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.765 165250 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.765 165250 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.765 165250 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.765 165250 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.765 165250 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.765 165250 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.765 165250 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.765 165250 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.765 165250 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.766 165250 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.766 165250 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.766 165250 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.766 165250 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.766 165250 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.766 165250 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.766 165250 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.766 165250 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.766 165250 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.767 165250 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.767 165250 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.767 165250 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.767 165250 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.767 165250 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.767 165250 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.767 165250 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.768 165250 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.768 165250 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.768 165250 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.768 165250 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.768 165250 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.768 165250 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.768 165250 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.768 165250 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.768 165250 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.769 165250 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.769 165250 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.769 165250 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.769 165250 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.769 165250 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.769 165250 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.770 165250 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.770 165250 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.770 165250 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.770 165250 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.770 165250 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.770 165250 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.770 165250 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.771 165250 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.771 165250 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.771 165250 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.771 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.771 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.771 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.772 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.772 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.772 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.772 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.772 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.772 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.772 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.773 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.773 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.773 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.773 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.773 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.773 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.773 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.773 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.774 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.774 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.774 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.774 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.774 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.774 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.774 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.774 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.774 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.775 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.775 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.775 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.775 165250 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.775 165250 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.775 165250 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.775 165250 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.775 165250 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:59:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 09:59:23.776 165250 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 05 09:59:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:23 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:23.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:24 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:24 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:59:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:24.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:59:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:25 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:25 compute-0 podman[165523]: 2025-12-05 09:59:25.550315357 +0000 UTC m=+0.133417322 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:59:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:59:25] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Dec 05 09:59:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:59:25] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Dec 05 09:59:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:25 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:25.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:26 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:59:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:26.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:59:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:26.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:59:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_09:59:27
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'vms', '.mgr', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.nfs']
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 09:59:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:59:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:59:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:59:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:59:27 compute-0 sudo[165551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:59:27 compute-0 sudo[165551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:59:27 compute-0 sudo[165551]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:59:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:59:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:27 compute-0 ceph-mon[74418]: pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Dec 05 09:59:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:59:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:27 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:59:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:27.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:28 compute-0 sshd-session[165577]: Accepted publickey for zuul from 192.168.122.30 port 50944 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 09:59:28 compute-0 systemd-logind[789]: New session 53 of user zuul.
Dec 05 09:59:28 compute-0 systemd[1]: Started Session 53 of User zuul.
Dec 05 09:59:28 compute-0 sshd-session[165577]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 09:59:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:28 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:28.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:28 compute-0 ceph-mon[74418]: pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Dec 05 09:59:28 compute-0 ceph-mon[74418]: pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:59:28 compute-0 ceph-mon[74418]: pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:59:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:59:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Dec 05 09:59:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:28.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:59:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:29 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:29 compute-0 python3.9[165731]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:59:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:29 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:29 compute-0 ceph-mon[74418]: pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Dec 05 09:59:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:30 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:30 compute-0 sudo[165887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ereshapvgmeulcyyeyqgmxqfeiwzjoya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928770.1528044-62-32464935231740/AnsiballZ_command.py'
Dec 05 09:59:30 compute-0 sudo[165887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:30.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:30 compute-0 python3.9[165889]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:59:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 2 op/s
Dec 05 09:59:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:30 : epoch 6932abee : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 09:59:30 compute-0 sudo[165887]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:31 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:59:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:31 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16e400bfa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:31.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:31 compute-0 ceph-mon[74418]: pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 2 op/s
Dec 05 09:59:32 compute-0 sudo[166052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnhziectavlpcnabkgieqmsylotnwpsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928771.462467-95-66709814491081/AnsiballZ_systemd_service.py'
Dec 05 09:59:32 compute-0 sudo[166052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:32 compute-0 python3.9[166054]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:59:32 compute-0 systemd[1]: Reloading.
Dec 05 09:59:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:32 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:32 compute-0 systemd-rc-local-generator[166080]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:59:32 compute-0 systemd-sysv-generator[166085]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:59:32 compute-0 sudo[166052]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:32.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 2 op/s
Dec 05 09:59:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:33 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:33 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:33.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:33 compute-0 python3.9[166240]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:59:33 compute-0 network[166257]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:59:33 compute-0 network[166258]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:59:33 compute-0 network[166259]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:59:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:34 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:34 compute-0 ceph-mon[74418]: pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 2 op/s
Dec 05 09:59:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:34.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 09:59:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:35 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16d8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:59:35] "GET /metrics HTTP/1.1" 200 48432 "" "Prometheus/2.51.0"
Dec 05 09:59:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:59:35] "GET /metrics HTTP/1.1" 200 48432 "" "Prometheus/2.51.0"
Dec 05 09:59:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:35 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1708003f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 05 09:59:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:35.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:35 compute-0 ceph-mon[74418]: pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 09:59:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095936 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 09:59:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[132172]: 05/12/2025 09:59:36 : epoch 6932abee : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f170c001ac0 fd 39 proxy ignored for local
Dec 05 09:59:36 compute-0 kernel: ganesha.nfsd[163911]: segfault at 50 ip 00007f17bf71232e sp 00007f1772ffc210 error 4 in libntirpc.so.5.8[7f17bf6f7000+2c000] likely on CPU 2 (core 0, socket 2)
Dec 05 09:59:36 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 05 09:59:36 compute-0 systemd[1]: Started Process Core Dump (PID 166328/UID 0).
Dec 05 09:59:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:59:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:36.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:59:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:36.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 09:59:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:36.977Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 09:59:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:36.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:59:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:37.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:37 compute-0 systemd-coredump[166329]: Process 132176 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 73:
                                                    #0  0x00007f17bf71232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 05 09:59:38 compute-0 systemd[1]: systemd-coredump@3-166328-0.service: Deactivated successfully.
Dec 05 09:59:38 compute-0 systemd[1]: systemd-coredump@3-166328-0.service: Consumed 1.431s CPU time.
Dec 05 09:59:38 compute-0 podman[166405]: 2025-12-05 09:59:38.132409203 +0000 UTC m=+0.030782273 container died 3fcd774447c8fdb0b4cc5052b6e5ad4014232ea2916b20ac49bdfb3817240861 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 09:59:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-815b348392f5bc9832b56e1eaab30ac28a2a7dbf1a7e06512b0553dbfdc38db1-merged.mount: Deactivated successfully.
Dec 05 09:59:38 compute-0 podman[166405]: 2025-12-05 09:59:38.535490909 +0000 UTC m=+0.433863969 container remove 3fcd774447c8fdb0b4cc5052b6e5ad4014232ea2916b20ac49bdfb3817240861 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:59:38 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Main process exited, code=exited, status=139/n/a
Dec 05 09:59:38 compute-0 ceph-mon[74418]: pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:59:38 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Failed with result 'exit-code'.
Dec 05 09:59:38 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 2.306s CPU time.
Dec 05 09:59:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:38.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:59:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:38.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:59:39 compute-0 sudo[166574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uasayohndfgdnaxtgrngfqxgvruqwvks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928778.8638513-152-245335866157358/AnsiballZ_systemd_service.py'
Dec 05 09:59:39 compute-0 sudo[166574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:39 compute-0 python3.9[166576]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:59:39 compute-0 sudo[166574]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:39 compute-0 ceph-mon[74418]: pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 09:59:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:39.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:39 compute-0 sudo[166727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkwtcsduqhteibssvoqtljoalmmcoapp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928779.6506414-152-274781231750228/AnsiballZ_systemd_service.py'
Dec 05 09:59:39 compute-0 sudo[166727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:40 compute-0 python3.9[166729]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:59:40 compute-0 sudo[166727]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:40 compute-0 sudo[166882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieownsijnwhakyleurlajdxxqbhwgcrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928780.351265-152-159243811612073/AnsiballZ_systemd_service.py'
Dec 05 09:59:40 compute-0 sudo[166882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:40.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Dec 05 09:59:40 compute-0 python3.9[166884]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:59:40 compute-0 sudo[166882]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:41 compute-0 sudo[167035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nohfhfnfjnhgbzujphogzkjmdjnndvlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928781.0776231-152-182438407927331/AnsiballZ_systemd_service.py'
Dec 05 09:59:41 compute-0 sudo[167035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:59:41 compute-0 python3.9[167037]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:59:41 compute-0 sudo[167035]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:41.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095942 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:59:42 compute-0 sudo[167188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgigqfsvwyvggfbnghxicnihsrtwdeek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928781.7916117-152-30711102351715/AnsiballZ_systemd_service.py'
Dec 05 09:59:42 compute-0 sudo[167188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:42 compute-0 python3.9[167190]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:59:42 compute-0 sudo[167188]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095942 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:59:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:42.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:42 compute-0 sudo[167343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpinsljsvayssfnweruxdtdbncnddtyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928782.523681-152-118164737872310/AnsiballZ_systemd_service.py'
Dec 05 09:59:42 compute-0 sudo[167343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Dec 05 09:59:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:59:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:59:42 compute-0 ceph-mon[74418]: pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Dec 05 09:59:43 compute-0 python3.9[167345]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:59:43 compute-0 sudo[167343]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:43 compute-0 sudo[167496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grnrkpdqpgmnmxvfpjbglgekvjvbfvkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928783.4162548-152-67013617130544/AnsiballZ_systemd_service.py'
Dec 05 09:59:43 compute-0 sudo[167496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:43.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:44 compute-0 python3.9[167498]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:59:44 compute-0 sudo[167496]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:44 compute-0 ceph-mon[74418]: pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Dec 05 09:59:44 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:59:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:44.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Dec 05 09:59:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:59:45] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
Dec 05 09:59:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:59:45] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
Dec 05 09:59:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:45.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:46 compute-0 sudo[167651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qruxqxyosfhvewjtglcefoprdyphwbeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928785.626852-308-80236695848176/AnsiballZ_file.py'
Dec 05 09:59:46 compute-0 sudo[167651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:46 compute-0 python3.9[167653]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:46 compute-0 sudo[167651]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:46 compute-0 ceph-mon[74418]: pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Dec 05 09:59:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:59:46 compute-0 sudo[167805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwjusoaqekpdiexietjauyrdxelhnvjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928786.3638616-308-259804053244294/AnsiballZ_file.py'
Dec 05 09:59:46 compute-0 sudo[167805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:46.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 05 09:59:46 compute-0 python3.9[167807]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:46 compute-0 sudo[167805]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:46.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:59:47 compute-0 sudo[167957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pulgclkfhqzmmdkffusxaajlrqrxbmow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928787.2557306-308-172576133988578/AnsiballZ_file.py'
Dec 05 09:59:47 compute-0 sudo[167957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:47 compute-0 python3.9[167959]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:47 compute-0 sudo[167957]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:47 compute-0 sudo[167960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 09:59:47 compute-0 sudo[167960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 09:59:47 compute-0 sudo[167960]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 09:59:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:47.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 09:59:48 compute-0 sudo[168135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdpbzoxbtrwbngsamcmkiqbnjclydkaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928787.863023-308-57693133788804/AnsiballZ_file.py'
Dec 05 09:59:48 compute-0 sudo[168135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:48 compute-0 ceph-mon[74418]: pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 05 09:59:48 compute-0 podman[168139]: 2025-12-05 09:59:48.420897535 +0000 UTC m=+0.082390587 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 09:59:48 compute-0 python3.9[168137]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:48 compute-0 sudo[168135]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:48 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Scheduled restart job, restart counter is at 4.
Dec 05 09:59:48 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:59:48 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 2.306s CPU time.
Dec 05 09:59:48 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 09:59:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:48.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:48 compute-0 sudo[168318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfmjjlsvckxajefdusbzfvhhdggufkuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928788.568567-308-123787237086530/AnsiballZ_file.py'
Dec 05 09:59:48 compute-0 sudo[168318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:59:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:48.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:59:48 compute-0 podman[168355]: 2025-12-05 09:59:48.946731011 +0000 UTC m=+0.039269356 container create 8ab60eb67dd7aac53c686233e020897e2dfda89edd71f5c454cc0418d6c97a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 09:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c253964fcd43cec8e04a95a2ae86cb0a8aa88e82cbafd9dfa3864596e1e214e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c253964fcd43cec8e04a95a2ae86cb0a8aa88e82cbafd9dfa3864596e1e214e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c253964fcd43cec8e04a95a2ae86cb0a8aa88e82cbafd9dfa3864596e1e214e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c253964fcd43cec8e04a95a2ae86cb0a8aa88e82cbafd9dfa3864596e1e214e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hocvro-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:59:48 compute-0 python3.9[168328]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:49 compute-0 podman[168355]: 2025-12-05 09:59:49.000318798 +0000 UTC m=+0.092857163 container init 8ab60eb67dd7aac53c686233e020897e2dfda89edd71f5c454cc0418d6c97a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 09:59:49 compute-0 podman[168355]: 2025-12-05 09:59:49.009058447 +0000 UTC m=+0.101596792 container start 8ab60eb67dd7aac53c686233e020897e2dfda89edd71f5c454cc0418d6c97a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 09:59:49 compute-0 bash[168355]: 8ab60eb67dd7aac53c686233e020897e2dfda89edd71f5c454cc0418d6c97a05
Dec 05 09:59:49 compute-0 podman[168355]: 2025-12-05 09:59:48.92915772 +0000 UTC m=+0.021696075 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 09:59:49 compute-0 sudo[168318]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:49 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 09:59:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 05 09:59:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 05 09:59:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 05 09:59:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 05 09:59:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 05 09:59:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 05 09:59:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 05 09:59:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 09:59:49 compute-0 sudo[168562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgnzcgwviflaryrheqtgjohwprxgdubs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928789.1264236-308-160332674266139/AnsiballZ_file.py'
Dec 05 09:59:49 compute-0 sudo[168562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:49 compute-0 python3.9[168564]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:49 compute-0 sudo[168562]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:49.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:49 compute-0 sudo[168714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugggkyxxmwjrqxibwuzlteiwxspuwuhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928789.696132-308-58981526084207/AnsiballZ_file.py'
Dec 05 09:59:49 compute-0 sudo[168714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:50 compute-0 python3.9[168716]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:50 compute-0 sudo[168714]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:50 compute-0 ceph-mon[74418]: pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:59:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:50.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:59:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:59:51 compute-0 sudo[168868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxbablvlyhkkvokfgzoxxkzwhirlyzjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928791.4707785-458-191023508457659/AnsiballZ_file.py'
Dec 05 09:59:51 compute-0 sudo[168868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:51.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:51 compute-0 python3.9[168870]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:51 compute-0 sudo[168868]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:52 compute-0 sudo[169022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yifuoooochciqcxlglngkhnsetoxiekx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928792.158198-458-2630064881308/AnsiballZ_file.py'
Dec 05 09:59:52 compute-0 sudo[169022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:52 compute-0 ceph-mon[74418]: pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:59:52 compute-0 python3.9[169024]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:52 compute-0 sudo[169022]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:52.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:59:52 compute-0 sudo[169174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukdhhlkdlecggkwuaxequzwcrfcjzmnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928792.742205-458-161526375217847/AnsiballZ_file.py'
Dec 05 09:59:52 compute-0 sudo[169174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:53 compute-0 python3.9[169176]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:53 compute-0 sudo[169174]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:53 compute-0 sudo[169326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eloektuzapbzecuanllesqbjmwahqtng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928793.3138108-458-199253982642787/AnsiballZ_file.py'
Dec 05 09:59:53 compute-0 sudo[169326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:53 compute-0 python3.9[169328]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:53 compute-0 sudo[169326]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:53.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:54 compute-0 sudo[169479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwhkeplckplobuzqflrfpokovffeasae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928793.920903-458-52517899056942/AnsiballZ_file.py'
Dec 05 09:59:54 compute-0 sudo[169479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:54 compute-0 python3.9[169481]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:54 compute-0 sudo[169479]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:54 compute-0 ceph-mon[74418]: pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 05 09:59:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:54.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:54 compute-0 sudo[169632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvwntotzclumglopuozwywqldyrybkkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928794.5083833-458-240332722436940/AnsiballZ_file.py'
Dec 05 09:59:54 compute-0 sudo[169632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec 05 09:59:55 compute-0 python3.9[169634]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:55 compute-0 sudo[169632]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:55 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 09:59:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:55 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 09:59:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:55 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 09:59:55 compute-0 sudo[169784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqzvsovjiopxrcqugspqbbzlgzvazgzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928795.365568-458-73656357699663/AnsiballZ_file.py'
Dec 05 09:59:55 compute-0 sudo[169784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:09:59:55] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
Dec 05 09:59:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:09:59:55] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
Dec 05 09:59:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/095955 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 09:59:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [NOTICE] 338/095955 (4) : haproxy version is 2.3.17-d1c9119
Dec 05 09:59:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [NOTICE] 338/095955 (4) : path to executable is /usr/local/sbin/haproxy
Dec 05 09:59:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [ALERT] 338/095955 (4) : backend 'backend' has no server available!
Dec 05 09:59:55 compute-0 podman[169786]: 2025-12-05 09:59:55.726717419 +0000 UTC m=+0.094451336 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 09:59:55 compute-0 python3.9[169787]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:59:55 compute-0 sudo[169784]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:55.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:56 compute-0 ceph-mon[74418]: pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec 05 09:59:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 09:59:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:56.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec 05 09:59:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:56.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:59:57 compute-0 sudo[169965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppurthxumjesuweylstqadzrbojswsza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928797.124957-611-138518621909741/AnsiballZ_command.py'
Dec 05 09:59:57 compute-0 sudo[169965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 09:59:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:59:57 compute-0 python3.9[169967]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:59:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:59:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:59:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:59:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:59:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:59:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:59:57 compute-0 sudo[169965]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 09:59:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 09:59:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:57.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 09:59:58 compute-0 python3.9[170121]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 09:59:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:09:59:58.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Dec 05 09:59:58 compute-0 ceph-mon[74418]: pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec 05 09:59:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T09:59:58.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 09:59:59 compute-0 sudo[170271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hekovuaqdxmjimfubmwqlabjwazymmbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928798.9944756-665-110371986990310/AnsiballZ_systemd_service.py'
Dec 05 09:59:59 compute-0 sudo[170271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:59:59 compute-0 python3.9[170273]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:59:59 compute-0 systemd[1]: Reloading.
Dec 05 09:59:59 compute-0 systemd-rc-local-generator[170298]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:59:59 compute-0 systemd-sysv-generator[170303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:59:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 09:59:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 09:59:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:09:59:59.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 09:59:59 compute-0 ceph-mon[74418]: pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Dec 05 10:00:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Dec 05 10:00:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 10:00:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :      osd.0 observed slow operation indications in BlueStore
Dec 05 10:00:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Dec 05 10:00:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec 05 10:00:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.qiwwqr on compute-1 is in unknown state
Dec 05 10:00:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:59 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:00:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:59 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:00:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 09:59:59 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:00:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:00 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:00:00 compute-0 sudo[170271]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:00 compute-0 sudo[170460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztbwynczfdnedsqdotwujseelcvckpjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928800.3487465-689-155778882206001/AnsiballZ_command.py'
Dec 05 10:00:00 compute-0 sudo[170460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:00:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:00.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:00 compute-0 python3.9[170462]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:00:00 compute-0 sudo[170460]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 340 B/s wr, 1 op/s
Dec 05 10:00:00 compute-0 ceph-mon[74418]: Health detail: HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Dec 05 10:00:00 compute-0 ceph-mon[74418]: [WRN] BLUESTORE_SLOW_OP_ALERT: 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 10:00:00 compute-0 ceph-mon[74418]:      osd.0 observed slow operation indications in BlueStore
Dec 05 10:00:00 compute-0 ceph-mon[74418]:      osd.1 observed slow operation indications in BlueStore
Dec 05 10:00:00 compute-0 ceph-mon[74418]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec 05 10:00:00 compute-0 ceph-mon[74418]:     daemon nfs.cephfs.0.0.compute-1.qiwwqr on compute-1 is in unknown state
Dec 05 10:00:01 compute-0 sudo[170613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkjhdgqmeigwbdnpxqblxjnysbwbkntp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928801.0022779-689-257294941131984/AnsiballZ_command.py'
Dec 05 10:00:01 compute-0 sudo[170613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:00:01 compute-0 python3.9[170615]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:00:01 compute-0 sudo[170613]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:00:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:01.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:01 compute-0 sudo[170766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pboalvhpwgkzgpdybsmuupseisidrgdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928801.6791785-689-50040566570584/AnsiballZ_command.py'
Dec 05 10:00:01 compute-0 sudo[170766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:00:01 compute-0 ceph-mon[74418]: pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 340 B/s wr, 1 op/s
Dec 05 10:00:02 compute-0 python3.9[170768]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:00:02 compute-0 sudo[170766]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:02 compute-0 sudo[170921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niereidydbuvkobrockgcivoeiknjwxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928802.2908463-689-45811321482836/AnsiballZ_command.py'
Dec 05 10:00:02 compute-0 sudo[170921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:00:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:02 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:00:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:02 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:00:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:02 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:00:02 compute-0 python3.9[170923]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:00:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:02.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:02 compute-0 sudo[170921]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 340 B/s wr, 1 op/s
Dec 05 10:00:03 compute-0 sudo[171074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzibsszaexdbgpizitbgssoelvcslgzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928802.9260116-689-95763977040557/AnsiballZ_command.py'
Dec 05 10:00:03 compute-0 sudo[171074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:00:03 compute-0 python3.9[171076]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:00:03 compute-0 sudo[171074]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:03.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:04 compute-0 sudo[171228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqyogjumeojrxpnkppdharqvsoxxoetd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928803.7962012-689-269449857820954/AnsiballZ_command.py'
Dec 05 10:00:04 compute-0 sudo[171228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:00:04 compute-0 ceph-mon[74418]: pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 340 B/s wr, 1 op/s
Dec 05 10:00:04 compute-0 python3.9[171230]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:00:04 compute-0 sudo[171228]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:04 compute-0 sudo[171382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbubphxmzvxniivyvbrrxochnyhyfhew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928804.4393802-689-34883860806098/AnsiballZ_command.py'
Dec 05 10:00:04 compute-0 sudo[171382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:00:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:04.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 681 B/s wr, 3 op/s
Dec 05 10:00:04 compute-0 python3.9[171384]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:00:04 compute-0 sudo[171382]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:00:05] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:00:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:00:05] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:00:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:05.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:06 compute-0 ceph-mon[74418]: pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 681 B/s wr, 3 op/s
Dec 05 10:00:06 compute-0 sudo[171537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hazxsghgxhoooolhvtuklggbpyeetywa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928805.9112425-851-59660991653671/AnsiballZ_getent.py'
Dec 05 10:00:06 compute-0 sudo[171537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:00:06 compute-0 python3.9[171539]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 05 10:00:06 compute-0 sudo[171537]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:00:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:06.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 340 B/s wr, 2 op/s
Dec 05 10:00:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:06.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:00:07 compute-0 sudo[171690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udghponrsxxjljbzvgsygceecvunbohm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928806.8461745-875-170358164442832/AnsiballZ_group.py'
Dec 05 10:00:07 compute-0 sudo[171690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:00:07 compute-0 python3.9[171692]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 10:00:07 compute-0 groupadd[171693]: group added to /etc/group: name=libvirt, GID=42473
Dec 05 10:00:07 compute-0 groupadd[171693]: group added to /etc/gshadow: name=libvirt
Dec 05 10:00:07 compute-0 groupadd[171693]: new group: name=libvirt, GID=42473
Dec 05 10:00:07 compute-0 sudo[171690]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:07 compute-0 sudo[171723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:00:07 compute-0 sudo[171723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:00:07 compute-0 sudo[171723]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:07.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100008 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:00:08 compute-0 ceph-mon[74418]: pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 340 B/s wr, 2 op/s
Dec 05 10:00:08 compute-0 sudo[171875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdymekpljezebcszimmrpdedojizhmwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928807.8620975-899-257044885820995/AnsiballZ_user.py'
Dec 05 10:00:08 compute-0 sudo[171875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:00:08 compute-0 python3.9[171877]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 05 10:00:08 compute-0 useradd[171879]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 05 10:00:08 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
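The find_keytab_entry / gssd_refresh_krb5_machine_credential pair is ganesha probing for Kerberos machine credentials at startup: with no default realm configured and no usable nfs/ entry in /etc/krb5.keytab, it records the failure and continues, which is harmless on a deployment that does not use Kerberos-secured NFS. What it looked for can be checked directly on the host:

    # list keytab entries with timestamps and enctypes; no nfs/ entry explains the CRIT/WARN pair
    klist -kte /etc/krb5.keytab
    # the "does not specify default realm" warning corresponds to a missing libdefaults setting
    grep -A2 '\[libdefaults\]' /etc/krb5.conf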
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
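The repeated gsh_dbus_* CRIT lines and the immediate dbus-thread shutdown above share one root cause, stated in the first of them: /run/dbus/system_bus_socket does not exist inside the NFS container, so every D-Bus registration is a no-op. Ganesha still initializes; only its D-Bus admin interface is lost. A quick check from the host (the container name here is taken from the log tag and may differ from the actual podman name):

    # confirm the socket is absent inside the ganesha container
    podman exec ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro \
        sh -c 'test -S /run/dbus/system_bus_socket && echo present || echo missing'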
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:00:08 compute-0 sudo[171875]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 10:00:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:08.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 10:00:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 681 B/s wr, 3 op/s
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:08.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:00:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:08.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
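Alertmanager on compute-0 is failing to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2: one attempt dies on a TCP connect timeout, the other on the overall context deadline, and after two attempts the dispatcher gives up. Note the receiver URLs use http:// against port 8443, which conventionally carries TLS, so both reachability and scheme are worth probing:

    # probe the logged receiver endpoints; expect a connect timeout or a TLS/plaintext mismatch
    curl -sv --connect-timeout 5 http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver
    curl -skv --connect-timeout 5 https://compute-1.ctlplane.example.com:8443/api/prometheus_receiver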
Dec 05 10:00:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:09 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:09 compute-0 sudo[172051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzqrtnznejvrnsglwrjazmwcpalggvod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928809.134939-932-251784412789624/AnsiballZ_setup.py'
Dec 05 10:00:09 compute-0 sudo[172051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:00:09 compute-0 python3.9[172053]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 10:00:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:09 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a40016e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:09.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:09 compute-0 sudo[172051]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:10 compute-0 sudo[172137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlsdxlzwqqebnaufvxgorktcqqlocray ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928809.134939-932-251784412789624/AnsiballZ_dnf.py'
Dec 05 10:00:10 compute-0 sudo[172137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:00:10 compute-0 ceph-mon[74418]: pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 681 B/s wr, 3 op/s
Dec 05 10:00:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:10 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:10 compute-0 python3.9[172139]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
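This is the virtualization package set going in as one dnf transaction. Note the trailing spaces inside the first four names ('libvirt ', 'libvirt-admin ', ...): they pass through from the playbook's variable list, so the playbook likely carries stray whitespace; whether dnf tolerates it is not visible inside this window. A trimmed hand-run equivalent (inventory alias assumed):

    # same package set, whitespace trimmed
    ansible compute-0 --become -m ansible.legacy.dnf \
        -a 'name=libvirt,libvirt-admin,libvirt-client,libvirt-daemon,qemu-kvm,qemu-img,libguestfs,libseccomp,swtpm,swtpm-tools,edk2-ovmf,ceph-common,cyrus-sasl-scram state=present'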
Dec 05 10:00:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:10.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 681 B/s wr, 2 op/s
Dec 05 10:00:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:11 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:00:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:11 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:00:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:11 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:00:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:11 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:11.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:12 compute-0 ceph-mon[74418]: pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 681 B/s wr, 2 op/s
Dec 05 10:00:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100012 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:00:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:12 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:00:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:00:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:12.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:00:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:13 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0900016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:00:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:13 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:13.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:14 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:14 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:00:14 compute-0 ceph-mon[74418]: pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:00:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:14.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec 05 10:00:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:15 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:00:15] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
Dec 05 10:00:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:00:15] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
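The paired mgr lines are a single Prometheus scrape against the ceph-mgr prometheus module: cherrypy logs the access and 48433 bytes of metrics go back. The access log omits the listening port; the module's default is 9283, so a manual scrape would look like this (port assumed, not recorded in the log):

    # manual scrape of the mgr exporter
    curl -s http://192.168.122.100:9283/metrics | head -n 20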
Dec 05 10:00:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:15 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0900016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 10:00:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:15.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 10:00:16 compute-0 ceph-mon[74418]: pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec 05 10:00:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:16 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:00:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:16.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 852 B/s wr, 3 op/s
Dec 05 10:00:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:16.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:00:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:16.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:00:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:16.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:00:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:17 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100017 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:00:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:17 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:17.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:18 compute-0 ceph-mon[74418]: pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 852 B/s wr, 3 op/s
Dec 05 10:00:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:18 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:18.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 852 B/s wr, 3 op/s
Dec 05 10:00:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:18.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:00:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:19 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:19 compute-0 podman[172207]: 2025-12-05 10:00:19.443294277 +0000 UTC m=+0.092998807 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
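This long podman line is a periodic health_status event for the ovn_metadata_agent container: healthy, failing streak 0, with the container's full config echoed into the event (the check itself is the mounted /openstack/healthcheck script). The same check can be driven by hand:

    # run the container's defined healthcheck once; exit status 0 means healthy
    podman healthcheck run ovn_metadata_agent && echo healthy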
Dec 05 10:00:19 compute-0 sudo[172235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:19 compute-0 sudo[172235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:00:19 compute-0 sudo[172235]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 compute-0 sudo[172262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:00:19 compute-0 sudo[172262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:00:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:19 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:19.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:20 compute-0 sudo[172262]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:00:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:00:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:00:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:00:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 539 B/s wr, 2 op/s
Dec 05 10:00:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:00:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:00:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:00:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:00:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:00:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:00:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:00:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:00:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:00:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:00:20 compute-0 ceph-mon[74418]: pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 852 B/s wr, 3 op/s
Dec 05 10:00:20 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:00:20 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:00:20 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
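This burst of handle_command/audit pairs is the cephadm mgr module's periodic reconciliation against the mon: listing the OSD blocklist, regenerating a minimal ceph.conf, fetching keys, persisting its spec under config-key, and checking for destroyed OSDs. Each dispatch maps to a plain CLI call runnable from any admin host:

    # the same commands the mgr is dispatching
    ceph osd blocklist ls --format json
    ceph config generate-minimal-conf
    ceph auth get client.admin
    ceph osd tree destroyed --format json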
Dec 05 10:00:20 compute-0 sudo[172348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:20 compute-0 sudo[172348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:00:20 compute-0 sudo[172348]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:20 compute-0 sudo[172376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:00:20 compute-0 sudo[172376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:00:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:20 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0900016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:00:20.552 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:00:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:00:20.553 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:00:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:00:20.553 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:00:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:20.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:20 compute-0 podman[172458]: 2025-12-05 10:00:20.830277879 +0000 UTC m=+0.050657777 container create ed5c13e6994373c817a13fb3926d5af803449d2a089889601c447ecfd052f394 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_engelbart, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 05 10:00:20 compute-0 systemd[1]: Started libpod-conmon-ed5c13e6994373c817a13fb3926d5af803449d2a089889601c447ecfd052f394.scope.
Dec 05 10:00:20 compute-0 podman[172458]: 2025-12-05 10:00:20.807144315 +0000 UTC m=+0.027524223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:00:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:00:20 compute-0 podman[172458]: 2025-12-05 10:00:20.932734714 +0000 UTC m=+0.153114622 container init ed5c13e6994373c817a13fb3926d5af803449d2a089889601c447ecfd052f394 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_engelbart, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 10:00:20 compute-0 podman[172458]: 2025-12-05 10:00:20.94758307 +0000 UTC m=+0.167962988 container start ed5c13e6994373c817a13fb3926d5af803449d2a089889601c447ecfd052f394 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_engelbart, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 05 10:00:20 compute-0 blissful_engelbart[172478]: 167 167
Dec 05 10:00:20 compute-0 systemd[1]: libpod-ed5c13e6994373c817a13fb3926d5af803449d2a089889601c447ecfd052f394.scope: Deactivated successfully.
Dec 05 10:00:20 compute-0 conmon[172478]: conmon ed5c13e6994373c817a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed5c13e6994373c817a13fb3926d5af803449d2a089889601c447ecfd052f394.scope/container/memory.events
Dec 05 10:00:20 compute-0 podman[172458]: 2025-12-05 10:00:20.964077572 +0000 UTC m=+0.184457450 container attach ed5c13e6994373c817a13fb3926d5af803449d2a089889601c447ecfd052f394 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_engelbart, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:00:20 compute-0 podman[172458]: 2025-12-05 10:00:20.964925406 +0000 UTC m=+0.185305284 container died ed5c13e6994373c817a13fb3926d5af803449d2a089889601c447ecfd052f394 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_engelbart, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:00:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-59884c8494cfed8af8c6e6699a6d069c30c059977689dcaa46f38ff588fba338-merged.mount: Deactivated successfully.
Dec 05 10:00:21 compute-0 podman[172458]: 2025-12-05 10:00:21.010433921 +0000 UTC m=+0.230813809 container remove ed5c13e6994373c817a13fb3926d5af803449d2a089889601c447ecfd052f394 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_engelbart, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 10:00:21 compute-0 systemd[1]: libpod-conmon-ed5c13e6994373c817a13fb3926d5af803449d2a089889601c447ecfd052f394.scope: Deactivated successfully.
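The create → init → start → attach → "167 167" → died → remove sequence for blissful_engelbart is the footprint of a single short-lived podman run --rm: cephadm spins up the ceph image, runs one command, and tears everything down in under a second. The "167 167" output is consistent with a uid/gid probe of the ceph user (167 is the ceph uid/gid in the image); a comparable one-shot, offered as an illustration only, since the exact probed command is not in the log:

    # one-shot container producing the same lifecycle events; the stat probe is an assumption
    podman run --rm quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
        stat -c '%u %g' /var/lib/ceph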
Dec 05 10:00:21 compute-0 podman[172512]: 2025-12-05 10:00:21.1922986 +0000 UTC m=+0.044862189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:00:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:21 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0003ea0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:21 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:00:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:21.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:00:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 539 B/s wr, 2 op/s
Dec 05 10:00:22 compute-0 podman[172512]: 2025-12-05 10:00:22.37355926 +0000 UTC m=+1.226122859 container create 71bfbcadf79ec8de78dac6070fa851e2c25ff7d89e7d3e5a964ade0fef1f0d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:00:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:00:22 compute-0 ceph-mon[74418]: pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 539 B/s wr, 2 op/s
Dec 05 10:00:22 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:00:22 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:00:22 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:00:22 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:00:22 compute-0 systemd[1]: Started libpod-conmon-71bfbcadf79ec8de78dac6070fa851e2c25ff7d89e7d3e5a964ade0fef1f0d6c.scope.
Dec 05 10:00:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bbc9da5a6559a4c6f2746c7a3d8d22afcf280b3caaaab0b8a0ccd8d92c4bb74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bbc9da5a6559a4c6f2746c7a3d8d22afcf280b3caaaab0b8a0ccd8d92c4bb74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bbc9da5a6559a4c6f2746c7a3d8d22afcf280b3caaaab0b8a0ccd8d92c4bb74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bbc9da5a6559a4c6f2746c7a3d8d22afcf280b3caaaab0b8a0ccd8d92c4bb74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bbc9da5a6559a4c6f2746c7a3d8d22afcf280b3caaaab0b8a0ccd8d92c4bb74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:22 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:22 compute-0 podman[172512]: 2025-12-05 10:00:22.4670393 +0000 UTC m=+1.319602859 container init 71bfbcadf79ec8de78dac6070fa851e2c25ff7d89e7d3e5a964ade0fef1f0d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:00:22 compute-0 podman[172512]: 2025-12-05 10:00:22.47580961 +0000 UTC m=+1.328373169 container start 71bfbcadf79ec8de78dac6070fa851e2c25ff7d89e7d3e5a964ade0fef1f0d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:00:22 compute-0 podman[172512]: 2025-12-05 10:00:22.479926002 +0000 UTC m=+1.332489591 container attach 71bfbcadf79ec8de78dac6070fa851e2c25ff7d89e7d3e5a964ade0fef1f0d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:00:22 compute-0 objective_sammet[172580]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:00:22 compute-0 objective_sammet[172580]: --> All data devices are unavailable
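The two objective_sammet lines are ceph-volume's verdict on the batch request from the cephadm call above: the single LVM data device it was handed (/dev/ceph_vg0/ceph_lv0) is "unavailable", which typically means the LV is already consumed, for example by an existing OSD or foreign signatures, so the batch becomes a no-op. Two checks narrow it down:

    # does the LV already belong to an OSD? (run inside a cephadm shell)
    cephadm shell -- ceph-volume lvm list --format json
    # inspect the LV and its tags from the host
    lvs -o lv_name,vg_name,lv_size,lv_tags ceph_vg0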
Dec 05 10:00:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:22.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:22 compute-0 systemd[1]: libpod-71bfbcadf79ec8de78dac6070fa851e2c25ff7d89e7d3e5a964ade0fef1f0d6c.scope: Deactivated successfully.
Dec 05 10:00:22 compute-0 podman[172512]: 2025-12-05 10:00:22.859531515 +0000 UTC m=+1.712095074 container died 71bfbcadf79ec8de78dac6070fa851e2c25ff7d89e7d3e5a964ade0fef1f0d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sammet, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 05 10:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bbc9da5a6559a4c6f2746c7a3d8d22afcf280b3caaaab0b8a0ccd8d92c4bb74-merged.mount: Deactivated successfully.
Dec 05 10:00:22 compute-0 podman[172512]: 2025-12-05 10:00:22.907025225 +0000 UTC m=+1.759588784 container remove 71bfbcadf79ec8de78dac6070fa851e2c25ff7d89e7d3e5a964ade0fef1f0d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 10:00:22 compute-0 systemd[1]: libpod-conmon-71bfbcadf79ec8de78dac6070fa851e2c25ff7d89e7d3e5a964ade0fef1f0d6c.scope: Deactivated successfully.
Dec 05 10:00:22 compute-0 sudo[172376]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:23 compute-0 sudo[172618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:23 compute-0 sudo[172618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:00:23 compute-0 sudo[172618]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:23 compute-0 sudo[172643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:00:23 compute-0 sudo[172643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
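Right after the batch attempt, the orchestrator re-runs the copied cephadm shim with "ceph-volume ... lvm list" to refresh its inventory of existing OSD LVs. The equivalent call with the installed cephadm binary (the shim under /var/lib/ceph is just a copied cephadm):

    # same inventory query via cephadm directly
    cephadm --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
        ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json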
Dec 05 10:00:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:23 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:23 compute-0 podman[172709]: 2025-12-05 10:00:23.440724546 +0000 UTC m=+0.038600737 container create 52d756a83df0bbcfaf9af1661c68b09d4d4978f6dacd1b50cead0eb5fc6e9121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cori, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:00:23 compute-0 ceph-mon[74418]: pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 539 B/s wr, 2 op/s
Dec 05 10:00:23 compute-0 systemd[1]: Started libpod-conmon-52d756a83df0bbcfaf9af1661c68b09d4d4978f6dacd1b50cead0eb5fc6e9121.scope.
Dec 05 10:00:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:00:23 compute-0 podman[172709]: 2025-12-05 10:00:23.425361636 +0000 UTC m=+0.023237837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:00:23 compute-0 podman[172709]: 2025-12-05 10:00:23.531682037 +0000 UTC m=+0.129558238 container init 52d756a83df0bbcfaf9af1661c68b09d4d4978f6dacd1b50cead0eb5fc6e9121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cori, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:00:23 compute-0 podman[172709]: 2025-12-05 10:00:23.537731612 +0000 UTC m=+0.135607793 container start 52d756a83df0bbcfaf9af1661c68b09d4d4978f6dacd1b50cead0eb5fc6e9121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cori, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:00:23 compute-0 podman[172709]: 2025-12-05 10:00:23.540886138 +0000 UTC m=+0.138762349 container attach 52d756a83df0bbcfaf9af1661c68b09d4d4978f6dacd1b50cead0eb5fc6e9121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:00:23 compute-0 pensive_cori[172726]: 167 167
Dec 05 10:00:23 compute-0 systemd[1]: libpod-52d756a83df0bbcfaf9af1661c68b09d4d4978f6dacd1b50cead0eb5fc6e9121.scope: Deactivated successfully.
Dec 05 10:00:23 compute-0 podman[172709]: 2025-12-05 10:00:23.54239661 +0000 UTC m=+0.140272791 container died 52d756a83df0bbcfaf9af1661c68b09d4d4978f6dacd1b50cead0eb5fc6e9121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cori, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 10:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0038abb7c339b73d7d44d44d8773c247619b09b0e4732f661a96f1e2a1a7e289-merged.mount: Deactivated successfully.
Dec 05 10:00:23 compute-0 podman[172709]: 2025-12-05 10:00:23.579171277 +0000 UTC m=+0.177047458 container remove 52d756a83df0bbcfaf9af1661c68b09d4d4978f6dacd1b50cead0eb5fc6e9121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cori, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 05 10:00:23 compute-0 systemd[1]: libpod-conmon-52d756a83df0bbcfaf9af1661c68b09d4d4978f6dacd1b50cead0eb5fc6e9121.scope: Deactivated successfully.
Dec 05 10:00:23 compute-0 podman[172751]: 2025-12-05 10:00:23.751004551 +0000 UTC m=+0.061087844 container create d13049d51b3d790926198f1a7d38d07a8f7fc13b1490b8e5547dcadf98b13d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_gates, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 05 10:00:23 compute-0 systemd[1]: Started libpod-conmon-d13049d51b3d790926198f1a7d38d07a8f7fc13b1490b8e5547dcadf98b13d5f.scope.
Dec 05 10:00:23 compute-0 podman[172751]: 2025-12-05 10:00:23.72321765 +0000 UTC m=+0.033300963 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:00:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c864685cd25a94ff4dbd0165b48040e71bbf05509eb4cdbb9cd1015f3a9fe16a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c864685cd25a94ff4dbd0165b48040e71bbf05509eb4cdbb9cd1015f3a9fe16a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c864685cd25a94ff4dbd0165b48040e71bbf05509eb4cdbb9cd1015f3a9fe16a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c864685cd25a94ff4dbd0165b48040e71bbf05509eb4cdbb9cd1015f3a9fe16a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:23 compute-0 podman[172751]: 2025-12-05 10:00:23.840647485 +0000 UTC m=+0.150730768 container init d13049d51b3d790926198f1a7d38d07a8f7fc13b1490b8e5547dcadf98b13d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:00:23 compute-0 podman[172751]: 2025-12-05 10:00:23.849294902 +0000 UTC m=+0.159378185 container start d13049d51b3d790926198f1a7d38d07a8f7fc13b1490b8e5547dcadf98b13d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 10:00:23 compute-0 podman[172751]: 2025-12-05 10:00:23.853754754 +0000 UTC m=+0.163838057 container attach d13049d51b3d790926198f1a7d38d07a8f7fc13b1490b8e5547dcadf98b13d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_gates, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:00:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:23 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0003ea0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:23.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:24 compute-0 pedantic_gates[172768]: {
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:     "1": [
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:         {
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             "devices": [
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "/dev/loop3"
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             ],
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             "lv_name": "ceph_lv0",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             "lv_size": "21470642176",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             "name": "ceph_lv0",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             "tags": {
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.cluster_name": "ceph",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.crush_device_class": "",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.encrypted": "0",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.osd_id": "1",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.type": "block",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.vdo": "0",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:                 "ceph.with_tpm": "0"
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             },
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             "type": "block",
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:             "vg_name": "ceph_vg0"
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:         }
Dec 05 10:00:24 compute-0 pedantic_gates[172768]:     ]
Dec 05 10:00:24 compute-0 pedantic_gates[172768]: }
Dec 05 10:00:24 compute-0 systemd[1]: libpod-d13049d51b3d790926198f1a7d38d07a8f7fc13b1490b8e5547dcadf98b13d5f.scope: Deactivated successfully.
Dec 05 10:00:24 compute-0 podman[172751]: 2025-12-05 10:00:24.143335352 +0000 UTC m=+0.453418645 container died d13049d51b3d790926198f1a7d38d07a8f7fc13b1490b8e5547dcadf98b13d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Dec 05 10:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c864685cd25a94ff4dbd0165b48040e71bbf05509eb4cdbb9cd1015f3a9fe16a-merged.mount: Deactivated successfully.
Dec 05 10:00:24 compute-0 podman[172751]: 2025-12-05 10:00:24.188615901 +0000 UTC m=+0.498699184 container remove d13049d51b3d790926198f1a7d38d07a8f7fc13b1490b8e5547dcadf98b13d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_gates, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 10:00:24 compute-0 systemd[1]: libpod-conmon-d13049d51b3d790926198f1a7d38d07a8f7fc13b1490b8e5547dcadf98b13d5f.scope: Deactivated successfully.
Dec 05 10:00:24 compute-0 sudo[172643]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 539 B/s wr, 2 op/s
Dec 05 10:00:24 compute-0 sudo[172788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:24 compute-0 sudo[172788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:00:24 compute-0 sudo[172788]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:24 compute-0 sudo[172814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:00:24 compute-0 sudo[172814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:00:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:24 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:24 compute-0 podman[172878]: 2025-12-05 10:00:24.771244362 +0000 UTC m=+0.042589667 container create 26682d37287be27eba5f3e24e55f823e2b8d0d46b44c6067af8d3562bcce872a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wescoff, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 10:00:24 compute-0 systemd[1]: Started libpod-conmon-26682d37287be27eba5f3e24e55f823e2b8d0d46b44c6067af8d3562bcce872a.scope.
Dec 05 10:00:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:24.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:24 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:00:24 compute-0 podman[172878]: 2025-12-05 10:00:24.750289779 +0000 UTC m=+0.021635114 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:00:24 compute-0 podman[172878]: 2025-12-05 10:00:24.858026398 +0000 UTC m=+0.129371753 container init 26682d37287be27eba5f3e24e55f823e2b8d0d46b44c6067af8d3562bcce872a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wescoff, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:00:24 compute-0 podman[172878]: 2025-12-05 10:00:24.864413654 +0000 UTC m=+0.135758979 container start 26682d37287be27eba5f3e24e55f823e2b8d0d46b44c6067af8d3562bcce872a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wescoff, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 05 10:00:24 compute-0 podman[172878]: 2025-12-05 10:00:24.868016932 +0000 UTC m=+0.139362297 container attach 26682d37287be27eba5f3e24e55f823e2b8d0d46b44c6067af8d3562bcce872a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wescoff, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:00:24 compute-0 gallant_wescoff[172894]: 167 167
Dec 05 10:00:24 compute-0 systemd[1]: libpod-26682d37287be27eba5f3e24e55f823e2b8d0d46b44c6067af8d3562bcce872a.scope: Deactivated successfully.
Dec 05 10:00:24 compute-0 conmon[172894]: conmon 26682d37287be27eba5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26682d37287be27eba5f3e24e55f823e2b8d0d46b44c6067af8d3562bcce872a.scope/container/memory.events
Dec 05 10:00:24 compute-0 podman[172878]: 2025-12-05 10:00:24.871061845 +0000 UTC m=+0.142407170 container died 26682d37287be27eba5f3e24e55f823e2b8d0d46b44c6067af8d3562bcce872a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-790208a06c2ef3e498d98a049c7a581d415629064439365561fe67b956fd18ca-merged.mount: Deactivated successfully.
Dec 05 10:00:24 compute-0 podman[172878]: 2025-12-05 10:00:24.911915954 +0000 UTC m=+0.183261259 container remove 26682d37287be27eba5f3e24e55f823e2b8d0d46b44c6067af8d3562bcce872a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 10:00:24 compute-0 systemd[1]: libpod-conmon-26682d37287be27eba5f3e24e55f823e2b8d0d46b44c6067af8d3562bcce872a.scope: Deactivated successfully.
Dec 05 10:00:25 compute-0 podman[172917]: 2025-12-05 10:00:25.085385153 +0000 UTC m=+0.049339292 container create b6e16c9d6c55850292a8287b313585e0c4973bcac17b7dbad731ad029be5afd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_colden, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 10:00:25 compute-0 systemd[1]: Started libpod-conmon-b6e16c9d6c55850292a8287b313585e0c4973bcac17b7dbad731ad029be5afd0.scope.
Dec 05 10:00:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c601dbf06e2be599a227b196cfc467a85ec646cd5df7ae4ca2fddabdda387f3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c601dbf06e2be599a227b196cfc467a85ec646cd5df7ae4ca2fddabdda387f3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c601dbf06e2be599a227b196cfc467a85ec646cd5df7ae4ca2fddabdda387f3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c601dbf06e2be599a227b196cfc467a85ec646cd5df7ae4ca2fddabdda387f3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:00:25 compute-0 podman[172917]: 2025-12-05 10:00:25.061394376 +0000 UTC m=+0.025348545 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:00:25 compute-0 podman[172917]: 2025-12-05 10:00:25.164786787 +0000 UTC m=+0.128740936 container init b6e16c9d6c55850292a8287b313585e0c4973bcac17b7dbad731ad029be5afd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_colden, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:00:25 compute-0 podman[172917]: 2025-12-05 10:00:25.170478083 +0000 UTC m=+0.134432212 container start b6e16c9d6c55850292a8287b313585e0c4973bcac17b7dbad731ad029be5afd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 10:00:25 compute-0 podman[172917]: 2025-12-05 10:00:25.174498872 +0000 UTC m=+0.138453001 container attach b6e16c9d6c55850292a8287b313585e0c4973bcac17b7dbad731ad029be5afd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_colden, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:00:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:25 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:00:25] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
Dec 05 10:00:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:00:25] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
Dec 05 10:00:25 compute-0 lvm[173014]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:00:25 compute-0 lvm[173014]: VG ceph_vg0 finished
Dec 05 10:00:25 compute-0 reverent_colden[172934]: {}
Dec 05 10:00:25 compute-0 systemd[1]: libpod-b6e16c9d6c55850292a8287b313585e0c4973bcac17b7dbad731ad029be5afd0.scope: Deactivated successfully.
Dec 05 10:00:25 compute-0 systemd[1]: libpod-b6e16c9d6c55850292a8287b313585e0c4973bcac17b7dbad731ad029be5afd0.scope: Consumed 1.194s CPU time.
Dec 05 10:00:25 compute-0 podman[172917]: 2025-12-05 10:00:25.899761638 +0000 UTC m=+0.863715787 container died b6e16c9d6c55850292a8287b313585e0c4973bcac17b7dbad731ad029be5afd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 10:00:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:25 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:25.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 179 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:00:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:26 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0003ea0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:26.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:26 compute-0 ceph-mon[74418]: pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 539 B/s wr, 2 op/s
Dec 05 10:00:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:26.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:00:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:26.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:00:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:26.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:00:27 compute-0 podman[173007]: 2025-12-05 10:00:27.148852526 +0000 UTC m=+1.342738312 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:00:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c601dbf06e2be599a227b196cfc467a85ec646cd5df7ae4ca2fddabdda387f3c-merged.mount: Deactivated successfully.
Dec 05 10:00:27 compute-0 podman[172917]: 2025-12-05 10:00:27.294438751 +0000 UTC m=+2.258392890 container remove b6e16c9d6c55850292a8287b313585e0c4973bcac17b7dbad731ad029be5afd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 10:00:27 compute-0 systemd[1]: libpod-conmon-b6e16c9d6c55850292a8287b313585e0c4973bcac17b7dbad731ad029be5afd0.scope: Deactivated successfully.
Dec 05 10:00:27 compute-0 sudo[172814]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:00:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:27 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:00:27 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:00:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:00:27 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:00:27
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'backups', '.nfs', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.mgr']
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:00:27 compute-0 sudo[173055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:00:27 compute-0 sudo[173055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:00:27 compute-0 sudo[173055]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:00:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:00:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:00:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:27 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:27.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:27 compute-0 sudo[173080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:00:27 compute-0 sudo[173080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:00:27 compute-0 sudo[173080]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:27 compute-0 ceph-mon[74418]: pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 179 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:00:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:00:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:00:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:00:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 179 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:00:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:28 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:28.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:28.916Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:00:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:28.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:00:28 compute-0 ceph-mon[74418]: pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 179 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:00:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:29 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:29 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Check health
Dec 05 10:00:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:29 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:29 compute-0 ceph-osd[82677]: bluestore.MempoolThread fragmentation_score=0.000028 took=0.000206s
Dec 05 10:00:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:29.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Dec 05 10:00:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:30 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:30.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:31 compute-0 ceph-mon[74418]: pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Dec 05 10:00:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:31 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:31 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:31.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:00:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:32 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:32.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:33 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:33 compute-0 ceph-mon[74418]: pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:33 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:33.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:00:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:34 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:34.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:34 compute-0 ceph-mon[74418]: pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:00:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:35 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:00:35] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Dec 05 10:00:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:00:35] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Dec 05 10:00:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:35 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:35.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:36 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:36.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:36.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:00:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:37 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:00:37 compute-0 ceph-mon[74418]: pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:37 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:37.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:38 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:38.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:38.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:00:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:39 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:39 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:39.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:00:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:40 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:40.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:41 compute-0 ceph-mon[74418]: pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:41 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:41 compute-0 kernel: SELinux:  Converting 2776 SID table entries...
Dec 05 10:00:41 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 10:00:41 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 05 10:00:41 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 10:00:41 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 05 10:00:41 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 10:00:41 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 10:00:41 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 10:00:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:41 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:41.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:42 compute-0 ceph-mon[74418]: pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:00:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:00:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:42 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:00:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:00:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:42.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:43 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:43 compute-0 ceph-mon[74418]: pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:00:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:43 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 10:00:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:43.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 10:00:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:00:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:44 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0840016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:44.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:45 compute-0 ceph-mon[74418]: pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:00:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:45 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:00:45] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:00:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:00:45] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:00:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:45 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4001e00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:00:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:45.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:00:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:46 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:46.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:46.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:00:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:46.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:00:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:47 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0840016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:00:47 compute-0 ceph-mon[74418]: pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:47 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:47.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:48 compute-0 sudo[173141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:00:48 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec 05 10:00:48 compute-0 sudo[173141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:00:48 compute-0 sudo[173141]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:48 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4001e00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:48.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:48.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:00:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:49 compute-0 ceph-mon[74418]: pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0840016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:49.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:00:50 compute-0 podman[173170]: 2025-12-05 10:00:50.431203904 +0000 UTC m=+0.076202347 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 05 10:00:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:50 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:50 compute-0 ceph-mon[74418]: pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:00:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:50.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:51 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4001e00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:51 compute-0 kernel: SELinux:  Converting 2776 SID table entries...
Dec 05 10:00:51 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 10:00:51 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 05 10:00:51 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 10:00:51 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 05 10:00:51 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 10:00:51 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 10:00:51 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 10:00:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:51 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:51.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:00:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:52 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:52.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:53 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:53 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b40091b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:00:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:53.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:00:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:00:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:54 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:54 compute-0 ceph-mon[74418]: pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:54.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:55 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:55 compute-0 ceph-mon[74418]: pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:00:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:00:55] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:00:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:00:55] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:00:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:55 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:55.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:56 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b40091b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:56.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:56.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:00:57 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 05 10:00:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:57 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:00:57 compute-0 podman[173202]: 2025-12-05 10:00:57.503473555 +0000 UTC m=+0.135424950 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:00:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:00:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:00:57 compute-0 ceph-mon[74418]: pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:00:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:00:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:00:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:00:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:00:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:00:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:00:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:57 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:57.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:00:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:58 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:58 compute-0 ceph-mon[74418]: pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:00:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 10:00:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:00:58.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 10:00:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:00:58.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:00:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=cleanup t=2025-12-05T10:00:59.202562221Z level=info msg="Completed cleanup jobs" duration=17.90176ms
Dec 05 10:00:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=sqlstore.transactions t=2025-12-05T10:00:59.208259897Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec 05 10:00:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=plugins.update.checker t=2025-12-05T10:00:59.318607688Z level=info msg="Update check succeeded" duration=51.774268ms
Dec 05 10:00:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=grafana.update.checker t=2025-12-05T10:00:59.329933998Z level=info msg="Update check succeeded" duration=53.889676ms
Dec 05 10:00:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:59 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:00:59 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:00:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:00:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:00:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:00:59.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:01:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:00 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:00.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:01 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:01 compute-0 CROND[173234]: (root) CMD (run-parts /etc/cron.hourly)
Dec 05 10:01:01 compute-0 run-parts[173237]: (/etc/cron.hourly) starting 0anacron
Dec 05 10:01:01 compute-0 run-parts[173243]: (/etc/cron.hourly) finished 0anacron
Dec 05 10:01:01 compute-0 CROND[173233]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 05 10:01:01 compute-0 ceph-mon[74418]: pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:01:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:01 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:01.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:01:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:02 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:02 compute-0 ceph-mon[74418]: pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:02.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:03 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:03 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:03.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:01:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:04 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:04.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:05 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:01:05] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:01:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:01:05] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:01:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:05 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:05.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:06 compute-0 ceph-mon[74418]: pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:01:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:06 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:06.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:06.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:01:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:06.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:01:07 compute-0 ceph-mon[74418]: pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:07 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:01:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:07 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:07.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:08 compute-0 sudo[175318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:01:08 compute-0 sudo[175318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:01:08 compute-0 sudo[175318]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:08.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:08.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:01:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:09 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:09 compute-0 ceph-mon[74418]: pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:09 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:09.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:01:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:10 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:10.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:11 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:11 compute-0 ceph-mon[74418]: pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:01:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:11 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:11.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:01:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:12 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:01:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:01:12 compute-0 ceph-mon[74418]: pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:01:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:12.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:13 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:13 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:13.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:01:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:14 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:14.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:15 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:15 compute-0 ceph-mon[74418]: pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:01:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:01:15] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:01:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:01:15] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:01:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100115 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:01:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:15 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:15.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:16 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:16.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:17.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:01:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:17 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4001b80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:01:17 compute-0 ceph-mon[74418]: pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:17 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:17.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:18 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:18.922Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:01:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:18.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:18 compute-0 ceph-mon[74418]: pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:19 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:19 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4001b80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:19.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:01:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:20 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c001f70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:01:20.553 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:01:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:01:20.555 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:01:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:01:20.555 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:01:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:20.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:21 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:21 compute-0 ceph-mon[74418]: pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:01:21 compute-0 podman[183399]: 2025-12-05 10:01:21.408361156 +0000 UTC m=+0.072231348 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 10:01:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:21 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:21.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:01:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:01:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:22 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4001b80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:22.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:23 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c001f70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:23 compute-0 ceph-mon[74418]: pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:01:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:23 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:23.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:01:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:24 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:01:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:24 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:24.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:25 compute-0 ceph-mon[74418]: pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:01:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:25 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002c80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:01:25] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:01:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:01:25] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:01:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:25 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c001f70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:25.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:01:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:26 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:26.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:27.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:01:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:27 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:01:27
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['vms', '.nfs', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'images']
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:01:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:27 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:01:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:27 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:01:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:01:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:01:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:01:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:01:27 compute-0 sudo[187619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:01:27 compute-0 sudo[187619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:01:27 compute-0 sudo[187619]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:27 compute-0 sudo[187726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:01:27 compute-0 sudo[187726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:01:27 compute-0 ceph-mon[74418]: pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:01:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:27 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002c80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:27 compute-0 podman[187680]: 2025-12-05 10:01:27.986285752 +0000 UTC m=+0.163013223 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Dec 05 10:01:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 10:01:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:27.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 10:01:28 compute-0 sudo[187992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:01:28 compute-0 sudo[187992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:01:28 compute-0 sudo[187992]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:01:28 compute-0 sudo[187726]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:28 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 05 10:01:28 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 10:01:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:28.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:01:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:28.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:29 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:01:29 compute-0 ceph-mon[74418]: pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:01:29 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 10:01:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:29 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:29 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:29.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 937 B/s wr, 3 op/s
Dec 05 10:01:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:30 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4002c80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 10:01:30 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 10:01:30 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:30.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:30 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:01:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 05 10:01:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 10:01:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:31 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:31 compute-0 ceph-mon[74418]: pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 937 B/s wr, 3 op/s
Dec 05 10:01:31 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:31 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:31 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 10:01:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 10:01:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 10:01:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:31 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:31.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 937 B/s wr, 2 op/s
Dec 05 10:01:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 05 10:01:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 10:01:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:01:32 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:01:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:01:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:01:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 10:01:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:01:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:32 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:01:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:01:32 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:01:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:01:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:01:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:01:32 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:01:32 compute-0 sudo[190242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:01:32 compute-0 sudo[190242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:01:32 compute-0 sudo[190242]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:32 compute-0 sudo[190267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:01:32 compute-0 sudo[190267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:01:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:32 compute-0 ceph-mon[74418]: pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 937 B/s wr, 2 op/s
Dec 05 10:01:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 10:01:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:01:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:01:32 compute-0 ceph-mon[74418]: pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 10:01:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:01:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:01:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:01:32 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Dec 05 10:01:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:01:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:32.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:33 compute-0 podman[190331]: 2025-12-05 10:01:33.23908894 +0000 UTC m=+0.095034222 container create 497080c913664103843ad65cf5d335b682d9e1f1f0b3ece85b2f58fcf0ff54ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 10:01:33 compute-0 podman[190331]: 2025-12-05 10:01:33.176201759 +0000 UTC m=+0.032147031 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:01:33 compute-0 systemd[1]: Started libpod-conmon-497080c913664103843ad65cf5d335b682d9e1f1f0b3ece85b2f58fcf0ff54ec.scope.
Dec 05 10:01:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:01:33 compute-0 podman[190331]: 2025-12-05 10:01:33.353614236 +0000 UTC m=+0.209559568 container init 497080c913664103843ad65cf5d335b682d9e1f1f0b3ece85b2f58fcf0ff54ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 10:01:33 compute-0 podman[190331]: 2025-12-05 10:01:33.366584211 +0000 UTC m=+0.222529483 container start 497080c913664103843ad65cf5d335b682d9e1f1f0b3ece85b2f58fcf0ff54ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 10:01:33 compute-0 podman[190331]: 2025-12-05 10:01:33.371142486 +0000 UTC m=+0.227087828 container attach 497080c913664103843ad65cf5d335b682d9e1f1f0b3ece85b2f58fcf0ff54ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:01:33 compute-0 pensive_raman[190348]: 167 167
Dec 05 10:01:33 compute-0 systemd[1]: libpod-497080c913664103843ad65cf5d335b682d9e1f1f0b3ece85b2f58fcf0ff54ec.scope: Deactivated successfully.
Dec 05 10:01:33 compute-0 podman[190331]: 2025-12-05 10:01:33.376135953 +0000 UTC m=+0.232081235 container died 497080c913664103843ad65cf5d335b682d9e1f1f0b3ece85b2f58fcf0ff54ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3b1dc8936d1b28ac3c123229e1604e9c89cfd00455aa2f5c6cf97437c8ce871-merged.mount: Deactivated successfully.
Dec 05 10:01:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:33 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4004110 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:33 compute-0 podman[190331]: 2025-12-05 10:01:33.425988618 +0000 UTC m=+0.281933900 container remove 497080c913664103843ad65cf5d335b682d9e1f1f0b3ece85b2f58fcf0ff54ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 10:01:33 compute-0 systemd[1]: libpod-conmon-497080c913664103843ad65cf5d335b682d9e1f1f0b3ece85b2f58fcf0ff54ec.scope: Deactivated successfully.
Dec 05 10:01:33 compute-0 podman[190373]: 2025-12-05 10:01:33.61273809 +0000 UTC m=+0.052289222 container create d3274809982e0dc646f24cbd783ec93930781b2b80eb446f68b419c384ab5349 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_rhodes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 10:01:33 compute-0 systemd[1]: Started libpod-conmon-d3274809982e0dc646f24cbd783ec93930781b2b80eb446f68b419c384ab5349.scope.
Dec 05 10:01:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808fc45a5dec7b7d9e2c7a28cc19a314b6f63b0b1e9e03c0f103efa3004262e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808fc45a5dec7b7d9e2c7a28cc19a314b6f63b0b1e9e03c0f103efa3004262e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808fc45a5dec7b7d9e2c7a28cc19a314b6f63b0b1e9e03c0f103efa3004262e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808fc45a5dec7b7d9e2c7a28cc19a314b6f63b0b1e9e03c0f103efa3004262e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808fc45a5dec7b7d9e2c7a28cc19a314b6f63b0b1e9e03c0f103efa3004262e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:33 compute-0 podman[190373]: 2025-12-05 10:01:33.593161434 +0000 UTC m=+0.032712596 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:01:33 compute-0 podman[190373]: 2025-12-05 10:01:33.695945568 +0000 UTC m=+0.135496740 container init d3274809982e0dc646f24cbd783ec93930781b2b80eb446f68b419c384ab5349 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_rhodes, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:01:33 compute-0 ceph-mon[74418]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Dec 05 10:01:33 compute-0 podman[190373]: 2025-12-05 10:01:33.709373106 +0000 UTC m=+0.148924238 container start d3274809982e0dc646f24cbd783ec93930781b2b80eb446f68b419c384ab5349 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_rhodes, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:01:33 compute-0 podman[190373]: 2025-12-05 10:01:33.712932143 +0000 UTC m=+0.152483305 container attach d3274809982e0dc646f24cbd783ec93930781b2b80eb446f68b419c384ab5349 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:01:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:33 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:33.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:34 compute-0 peaceful_rhodes[190389]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:01:34 compute-0 peaceful_rhodes[190389]: --> All data devices are unavailable
Dec 05 10:01:34 compute-0 systemd[1]: libpod-d3274809982e0dc646f24cbd783ec93930781b2b80eb446f68b419c384ab5349.scope: Deactivated successfully.
Dec 05 10:01:34 compute-0 podman[190373]: 2025-12-05 10:01:34.083815227 +0000 UTC m=+0.523366349 container died d3274809982e0dc646f24cbd783ec93930781b2b80eb446f68b419c384ab5349 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 10:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-808fc45a5dec7b7d9e2c7a28cc19a314b6f63b0b1e9e03c0f103efa3004262e4-merged.mount: Deactivated successfully.
Dec 05 10:01:34 compute-0 podman[190373]: 2025-12-05 10:01:34.141828065 +0000 UTC m=+0.581379227 container remove d3274809982e0dc646f24cbd783ec93930781b2b80eb446f68b419c384ab5349 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_rhodes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:01:34 compute-0 systemd[1]: libpod-conmon-d3274809982e0dc646f24cbd783ec93930781b2b80eb446f68b419c384ab5349.scope: Deactivated successfully.
Dec 05 10:01:34 compute-0 sudo[190267]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:34 compute-0 sudo[190424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:01:34 compute-0 sudo[190424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:01:34 compute-0 sudo[190424]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:34 compute-0 sudo[190449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:01:34 compute-0 sudo[190449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:01:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 10:01:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:34 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:34 compute-0 ceph-mon[74418]: pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 10:01:34 compute-0 podman[190520]: 2025-12-05 10:01:34.724931109 +0000 UTC m=+0.055081859 container create 8899d43df5fd5804aad6def02903c481a68234105f8411f2b177635fae9238cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:01:34 compute-0 systemd[1]: Started libpod-conmon-8899d43df5fd5804aad6def02903c481a68234105f8411f2b177635fae9238cf.scope.
Dec 05 10:01:34 compute-0 podman[190520]: 2025-12-05 10:01:34.69686242 +0000 UTC m=+0.027013220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:01:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:01:34 compute-0 podman[190520]: 2025-12-05 10:01:34.841166761 +0000 UTC m=+0.171317611 container init 8899d43df5fd5804aad6def02903c481a68234105f8411f2b177635fae9238cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:01:34 compute-0 podman[190520]: 2025-12-05 10:01:34.852472761 +0000 UTC m=+0.182623521 container start 8899d43df5fd5804aad6def02903c481a68234105f8411f2b177635fae9238cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 05 10:01:34 compute-0 serene_wright[190540]: 167 167
Dec 05 10:01:34 compute-0 systemd[1]: libpod-8899d43df5fd5804aad6def02903c481a68234105f8411f2b177635fae9238cf.scope: Deactivated successfully.
Dec 05 10:01:34 compute-0 podman[190520]: 2025-12-05 10:01:34.860799989 +0000 UTC m=+0.190950739 container attach 8899d43df5fd5804aad6def02903c481a68234105f8411f2b177635fae9238cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_wright, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:01:34 compute-0 conmon[190540]: conmon 8899d43df5fd5804aad6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8899d43df5fd5804aad6def02903c481a68234105f8411f2b177635fae9238cf.scope/container/memory.events
Dec 05 10:01:34 compute-0 podman[190520]: 2025-12-05 10:01:34.862481595 +0000 UTC m=+0.192632355 container died 8899d43df5fd5804aad6def02903c481a68234105f8411f2b177635fae9238cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_wright, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 10:01:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:34.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8a22b6b036ba313f2f42a26d7e97d56cef457b8eaadbe87af77f018387c8141-merged.mount: Deactivated successfully.
Dec 05 10:01:34 compute-0 podman[190520]: 2025-12-05 10:01:34.977046751 +0000 UTC m=+0.307197501 container remove 8899d43df5fd5804aad6def02903c481a68234105f8411f2b177635fae9238cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_wright, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 10:01:34 compute-0 systemd[1]: libpod-conmon-8899d43df5fd5804aad6def02903c481a68234105f8411f2b177635fae9238cf.scope: Deactivated successfully.
Dec 05 10:01:35 compute-0 podman[190567]: 2025-12-05 10:01:35.217995277 +0000 UTC m=+0.085286515 container create c67be80e38965991b1159c9d516fde1ebf81f89da083a4ea22e80ff091847438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 05 10:01:35 compute-0 systemd[1]: Started libpod-conmon-c67be80e38965991b1159c9d516fde1ebf81f89da083a4ea22e80ff091847438.scope.
Dec 05 10:01:35 compute-0 podman[190567]: 2025-12-05 10:01:35.188047318 +0000 UTC m=+0.055338566 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:01:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462b6effb0b8b6835d97e76daafda56b1ce1f628fca60ea7440d334f38842c3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462b6effb0b8b6835d97e76daafda56b1ce1f628fca60ea7440d334f38842c3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462b6effb0b8b6835d97e76daafda56b1ce1f628fca60ea7440d334f38842c3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462b6effb0b8b6835d97e76daafda56b1ce1f628fca60ea7440d334f38842c3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:35 compute-0 podman[190567]: 2025-12-05 10:01:35.319928419 +0000 UTC m=+0.187219747 container init c67be80e38965991b1159c9d516fde1ebf81f89da083a4ea22e80ff091847438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 10:01:35 compute-0 podman[190567]: 2025-12-05 10:01:35.327951808 +0000 UTC m=+0.195243046 container start c67be80e38965991b1159c9d516fde1ebf81f89da083a4ea22e80ff091847438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kapitsa, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:01:35 compute-0 podman[190567]: 2025-12-05 10:01:35.333133251 +0000 UTC m=+0.200424489 container attach c67be80e38965991b1159c9d516fde1ebf81f89da083a4ea22e80ff091847438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kapitsa, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:01:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:35 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]: {
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:     "1": [
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:         {
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             "devices": [
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "/dev/loop3"
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             ],
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             "lv_name": "ceph_lv0",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             "lv_size": "21470642176",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             "name": "ceph_lv0",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             "tags": {
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.cluster_name": "ceph",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.crush_device_class": "",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.encrypted": "0",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.osd_id": "1",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.type": "block",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.vdo": "0",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:                 "ceph.with_tpm": "0"
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             },
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             "type": "block",
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:             "vg_name": "ceph_vg0"
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:         }
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]:     ]
Dec 05 10:01:35 compute-0 sweet_kapitsa[190583]: }
Dec 05 10:01:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:01:35] "GET /metrics HTTP/1.1" 200 48432 "" "Prometheus/2.51.0"
Dec 05 10:01:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:01:35] "GET /metrics HTTP/1.1" 200 48432 "" "Prometheus/2.51.0"
Dec 05 10:01:35 compute-0 systemd[1]: libpod-c67be80e38965991b1159c9d516fde1ebf81f89da083a4ea22e80ff091847438.scope: Deactivated successfully.
Dec 05 10:01:35 compute-0 podman[190567]: 2025-12-05 10:01:35.673656363 +0000 UTC m=+0.540947631 container died c67be80e38965991b1159c9d516fde1ebf81f89da083a4ea22e80ff091847438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:01:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100135 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:01:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-462b6effb0b8b6835d97e76daafda56b1ce1f628fca60ea7440d334f38842c3b-merged.mount: Deactivated successfully.
Dec 05 10:01:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:35 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4004110 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:36.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:36 compute-0 podman[190567]: 2025-12-05 10:01:36.159477924 +0000 UTC m=+1.026769202 container remove c67be80e38965991b1159c9d516fde1ebf81f89da083a4ea22e80ff091847438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:01:36 compute-0 systemd[1]: libpod-conmon-c67be80e38965991b1159c9d516fde1ebf81f89da083a4ea22e80ff091847438.scope: Deactivated successfully.
Dec 05 10:01:36 compute-0 sudo[190449]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:36 compute-0 sudo[190608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:01:36 compute-0 sudo[190608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:01:36 compute-0 sudo[190608]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:36 compute-0 sudo[190633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:01:36 compute-0 sudo[190633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:01:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 10:01:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:36 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:36 compute-0 podman[190697]: 2025-12-05 10:01:36.72733926 +0000 UTC m=+0.041229169 container create 03b0478a2edb75e38ee3f91831b3e266b903c2d233fc7dc25c6df7d9ae060ac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:01:36 compute-0 systemd[1]: Started libpod-conmon-03b0478a2edb75e38ee3f91831b3e266b903c2d233fc7dc25c6df7d9ae060ac5.scope.
Dec 05 10:01:36 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:01:36 compute-0 podman[190697]: 2025-12-05 10:01:36.709984495 +0000 UTC m=+0.023874424 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:01:36 compute-0 podman[190697]: 2025-12-05 10:01:36.828850789 +0000 UTC m=+0.142740718 container init 03b0478a2edb75e38ee3f91831b3e266b903c2d233fc7dc25c6df7d9ae060ac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:01:36 compute-0 podman[190697]: 2025-12-05 10:01:36.835655165 +0000 UTC m=+0.149545074 container start 03b0478a2edb75e38ee3f91831b3e266b903c2d233fc7dc25c6df7d9ae060ac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:01:36 compute-0 systemd[1]: libpod-03b0478a2edb75e38ee3f91831b3e266b903c2d233fc7dc25c6df7d9ae060ac5.scope: Deactivated successfully.
Dec 05 10:01:36 compute-0 naughty_lamarr[190713]: 167 167
Dec 05 10:01:36 compute-0 conmon[190713]: conmon 03b0478a2edb75e38ee3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03b0478a2edb75e38ee3f91831b3e266b903c2d233fc7dc25c6df7d9ae060ac5.scope/container/memory.events
Dec 05 10:01:36 compute-0 podman[190697]: 2025-12-05 10:01:36.85481681 +0000 UTC m=+0.168706819 container attach 03b0478a2edb75e38ee3f91831b3e266b903c2d233fc7dc25c6df7d9ae060ac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 10:01:36 compute-0 podman[190697]: 2025-12-05 10:01:36.855469808 +0000 UTC m=+0.169359767 container died 03b0478a2edb75e38ee3f91831b3e266b903c2d233fc7dc25c6df7d9ae060ac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3c9fc30372241cb16d60632fb59f3d709edf1e52389cbcb5829e9b0af6159ab-merged.mount: Deactivated successfully.
Dec 05 10:01:36 compute-0 podman[190697]: 2025-12-05 10:01:36.903856912 +0000 UTC m=+0.217746821 container remove 03b0478a2edb75e38ee3f91831b3e266b903c2d233fc7dc25c6df7d9ae060ac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 10:01:36 compute-0 systemd[1]: libpod-conmon-03b0478a2edb75e38ee3f91831b3e266b903c2d233fc7dc25c6df7d9ae060ac5.scope: Deactivated successfully.
Dec 05 10:01:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:36.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:37.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:01:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:37.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:01:37 compute-0 podman[190739]: 2025-12-05 10:01:37.063435251 +0000 UTC m=+0.039972745 container create ffa75a4fb79427594fbcd46da3d0ad3cff94d88dc57abbb722b009be5c7fdfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_wright, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:01:37 compute-0 systemd[1]: Started libpod-conmon-ffa75a4fb79427594fbcd46da3d0ad3cff94d88dc57abbb722b009be5c7fdfe5.scope.
Dec 05 10:01:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7446118febbaf051eed5a7beba8177e1c279761e2fc803b0a8764c6ee08fb0ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7446118febbaf051eed5a7beba8177e1c279761e2fc803b0a8764c6ee08fb0ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7446118febbaf051eed5a7beba8177e1c279761e2fc803b0a8764c6ee08fb0ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7446118febbaf051eed5a7beba8177e1c279761e2fc803b0a8764c6ee08fb0ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:01:37 compute-0 podman[190739]: 2025-12-05 10:01:37.137410747 +0000 UTC m=+0.113948261 container init ffa75a4fb79427594fbcd46da3d0ad3cff94d88dc57abbb722b009be5c7fdfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:01:37 compute-0 podman[190739]: 2025-12-05 10:01:37.046369744 +0000 UTC m=+0.022907248 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:01:37 compute-0 podman[190739]: 2025-12-05 10:01:37.144312905 +0000 UTC m=+0.120850399 container start ffa75a4fb79427594fbcd46da3d0ad3cff94d88dc57abbb722b009be5c7fdfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_wright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:01:37 compute-0 podman[190739]: 2025-12-05 10:01:37.148568282 +0000 UTC m=+0.125105806 container attach ffa75a4fb79427594fbcd46da3d0ad3cff94d88dc57abbb722b009be5c7fdfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_wright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 10:01:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:37 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:37 compute-0 ceph-mon[74418]: pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 10:01:37 compute-0 lvm[190831]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:01:37 compute-0 lvm[190831]: VG ceph_vg0 finished
Dec 05 10:01:37 compute-0 elated_wright[190756]: {}
Dec 05 10:01:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:01:37 compute-0 systemd[1]: libpod-ffa75a4fb79427594fbcd46da3d0ad3cff94d88dc57abbb722b009be5c7fdfe5.scope: Deactivated successfully.
Dec 05 10:01:37 compute-0 systemd[1]: libpod-ffa75a4fb79427594fbcd46da3d0ad3cff94d88dc57abbb722b009be5c7fdfe5.scope: Consumed 1.062s CPU time.
Dec 05 10:01:37 compute-0 podman[190739]: 2025-12-05 10:01:37.810603177 +0000 UTC m=+0.787140691 container died ffa75a4fb79427594fbcd46da3d0ad3cff94d88dc57abbb722b009be5c7fdfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7446118febbaf051eed5a7beba8177e1c279761e2fc803b0a8764c6ee08fb0ea-merged.mount: Deactivated successfully.
Dec 05 10:01:37 compute-0 podman[190739]: 2025-12-05 10:01:37.85453315 +0000 UTC m=+0.831070644 container remove ffa75a4fb79427594fbcd46da3d0ad3cff94d88dc57abbb722b009be5c7fdfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_wright, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:01:37 compute-0 systemd[1]: libpod-conmon-ffa75a4fb79427594fbcd46da3d0ad3cff94d88dc57abbb722b009be5c7fdfe5.scope: Deactivated successfully.
Dec 05 10:01:37 compute-0 sudo[190633]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:01:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:01:37 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:37 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:38.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:38 compute-0 sudo[190847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:01:38 compute-0 sudo[190847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:01:38 compute-0 sudo[190847]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 10:01:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:38 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4004110 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:38.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:01:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:38.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:39 compute-0 ceph-mon[74418]: pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 05 10:01:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:39 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:39 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003c90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:40.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 303 B/s rd, 101 B/s wr, 0 op/s
Dec 05 10:01:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:40 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:40.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:41 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4004110 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:41 compute-0 ceph-mon[74418]: pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 303 B/s rd, 101 B/s wr, 0 op/s
Dec 05 10:01:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:41 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 10:01:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:42.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 10:01:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 303 B/s rd, 101 B/s wr, 0 op/s
Dec 05 10:01:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:42 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003cb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec 05 10:01:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:01:42 compute-0 ceph-mon[74418]: pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 303 B/s rd, 101 B/s wr, 0 op/s
Dec 05 10:01:42 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:01:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:01:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:42.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:43 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:01:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:01:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:43 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:44.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:01:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:44 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:44.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:45 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:01:45] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:01:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:01:45] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:01:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:45 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:46.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:46 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:46 compute-0 ceph-mon[74418]: pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:01:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:46.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:47.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:01:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:47.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:01:47 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Dec 05 10:01:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 10:01:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 05 10:01:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 10:01:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 05 10:01:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 10:01:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 10:01:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 10:01:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:47 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:01:47 compute-0 ceph-mon[74418]: pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:48 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003cf0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:48.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:48 compute-0 sudo[190895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:01:48 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 05 10:01:48 compute-0 sudo[190895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:01:48 compute-0 sudo[190895]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:48 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:48 compute-0 groupadd[190923]: group added to /etc/group: name=dnsmasq, GID=991
Dec 05 10:01:48 compute-0 groupadd[190923]: group added to /etc/gshadow: name=dnsmasq
Dec 05 10:01:48 compute-0 groupadd[190923]: new group: name=dnsmasq, GID=991
Dec 05 10:01:48 compute-0 useradd[190930]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 05 10:01:48 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Dec 05 10:01:48 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Dec 05 10:01:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:48.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:01:48 compute-0 ceph-mgr[74711]: [dashboard INFO request] [192.168.122.100:39882] [POST] [200] [0.006s] [4.0B] [26cbfc18-0d4f-4320-aada-8a63c0c9b1af] /api/prometheus_receiver
Dec 05 10:01:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:48.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:49 compute-0 ceph-mon[74418]: pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:50.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:50 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:50.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:51 compute-0 ceph-mon[74418]: pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:51 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:51 compute-0 groupadd[190945]: group added to /etc/group: name=clevis, GID=990
Dec 05 10:01:51 compute-0 groupadd[190945]: group added to /etc/gshadow: name=clevis
Dec 05 10:01:51 compute-0 groupadd[190945]: new group: name=clevis, GID=990
Dec 05 10:01:51 compute-0 useradd[190953]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 05 10:01:51 compute-0 podman[190949]: 2025-12-05 10:01:51.584222186 +0000 UTC m=+0.077182198 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 05 10:01:51 compute-0 usermod[190981]: add 'clevis' to group 'tss'
Dec 05 10:01:51 compute-0 usermod[190981]: add 'clevis' to shadow group 'tss'
Dec 05 10:01:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:51 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:52.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:52 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:01:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:52.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:53 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003d30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:53 compute-0 ceph-mon[74418]: pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:53 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0900019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:54.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:01:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:54 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00047e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:54 compute-0 polkitd[43445]: Reloading rules
Dec 05 10:01:54 compute-0 polkitd[43445]: Collecting garbage unconditionally...
Dec 05 10:01:54 compute-0 polkitd[43445]: Loading rules from directory /etc/polkit-1/rules.d
Dec 05 10:01:54 compute-0 polkitd[43445]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 05 10:01:54 compute-0 polkitd[43445]: Finished loading, compiling and executing 3 rules
Dec 05 10:01:54 compute-0 polkitd[43445]: Reloading rules
Dec 05 10:01:54 compute-0 polkitd[43445]: Collecting garbage unconditionally...
Dec 05 10:01:54 compute-0 polkitd[43445]: Loading rules from directory /etc/polkit-1/rules.d
Dec 05 10:01:54 compute-0 polkitd[43445]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 05 10:01:54 compute-0 polkitd[43445]: Finished loading, compiling and executing 3 rules
Dec 05 10:01:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:54.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:55 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:01:55] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:01:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:01:55] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:01:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:55 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003d50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:56.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:56 compute-0 ceph-mon[74418]: pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:01:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:56 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0900019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:01:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:56.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:01:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:01:57.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:01:57 compute-0 groupadd[191174]: group added to /etc/group: name=ceph, GID=167
Dec 05 10:01:57 compute-0 groupadd[191174]: group added to /etc/gshadow: name=ceph
Dec 05 10:01:57 compute-0 groupadd[191174]: new group: name=ceph, GID=167
Dec 05 10:01:57 compute-0 useradd[191180]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Dec 05 10:01:57 compute-0 ceph-mon[74418]: pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:57 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:01:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:01:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:01:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:01:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:01:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:01:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:01:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:01:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:01:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:58 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:01:58.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:01:58 compute-0 podman[191188]: 2025-12-05 10:01:58.479721158 +0000 UTC m=+0.124661873 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:01:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:58 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003d70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:01:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:01:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:01:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:01:58.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:01:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:01:59 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0900019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:01:59 compute-0 ceph-mon[74418]: pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:00 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:00.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:00 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:00.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:01 compute-0 ceph-mon[74418]: pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:01 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Dec 05 10:02:01 compute-0 sshd[1005]: Received signal 15; terminating.
Dec 05 10:02:01 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Dec 05 10:02:01 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Dec 05 10:02:01 compute-0 systemd[1]: sshd.service: Consumed 2.405s CPU time, read 32.0K from disk, written 0B to disk.
Dec 05 10:02:01 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Dec 05 10:02:01 compute-0 systemd[1]: Stopping sshd-keygen.target...
Dec 05 10:02:01 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 10:02:01 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 10:02:01 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 10:02:01 compute-0 systemd[1]: Reached target sshd-keygen.target.
Dec 05 10:02:01 compute-0 systemd[1]: Starting OpenSSH server daemon...
Dec 05 10:02:01 compute-0 sshd[191904]: Server listening on 0.0.0.0 port 22.
Dec 05 10:02:01 compute-0 sshd[191904]: Server listening on :: port 22.
Dec 05 10:02:01 compute-0 systemd[1]: Started OpenSSH server daemon.
Dec 05 10:02:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:01 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003d90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:02 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0900019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:02.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:02 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:02:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:02.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:03 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:03 compute-0 ceph-mon[74418]: pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:03 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 10:02:03 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 10:02:03 compute-0 systemd[1]: Reloading.
Dec 05 10:02:03 compute-0 systemd-sysv-generator[192164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:02:03 compute-0 systemd-rc-local-generator[192161]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:02:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:04 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:04.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:04 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 10:02:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:02:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:04 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:05.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:05 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:02:05] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 10:02:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:02:05] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 10:02:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:06 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:06.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:06 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:07.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:02:07.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:02:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:02:07.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:02:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:02:07.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:02:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:07 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:02:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:08.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:02:08 compute-0 sudo[196391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:02:08 compute-0 sudo[196391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:02:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:08 compute-0 sudo[196391]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:09.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:09 compute-0 ceph-mds[96460]: mds.beacon.cephfs.compute-0.hfgtsk missed beacon ack from the monitors
Dec 05 10:02:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:02:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:09 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:10 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090004510 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:10.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:10 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:10 compute-0 ceph-mon[74418]: pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:02:10 compute-0 ceph-mon[74418]: pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:10 compute-0 ceph-mon[74418]: pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:11.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:11 compute-0 sudo[172137]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:11 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:11 compute-0 ceph-mon[74418]: pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:12 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084003dd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:12.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:12 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090004510 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:02:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:02:12 compute-0 ceph-mon[74418]: pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:02:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:13.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:13 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:14 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:14.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:02:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:02:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:14 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:15.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:15 compute-0 ceph-mon[74418]: pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:02:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:15 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:15 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 10:02:15 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 10:02:15 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.930s CPU time.
Dec 05 10:02:15 compute-0 systemd[1]: run-r4b7aec4b5152469bac1c062937fafa9a.service: Deactivated successfully.
Dec 05 10:02:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:02:15] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Dec 05 10:02:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:02:15] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Dec 05 10:02:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:16 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:16.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:16 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:02:17.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:02:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:17.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:17 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:18 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:18.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:18 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:19.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:02:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:19 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:20 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:20.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:20 compute-0 ceph-mon[74418]: pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:20 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:02:20.555 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:02:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:02:20.557 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:02:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:02:20.557 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:02:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:21.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:21 compute-0 ceph-mon[74418]: pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:21 compute-0 ceph-mon[74418]: pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:21 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:22 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:22.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:22 compute-0 podman[200673]: 2025-12-05 10:02:22.419071989 +0000 UTC m=+0.071101122 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 10:02:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:22 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:22 compute-0 sudo[200767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikycszcxgjjsypiacxyyrphnskvqudbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928941.9240239-968-170174555235828/AnsiballZ_systemd.py'
Dec 05 10:02:22 compute-0 sudo[200767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:22 compute-0 python3.9[200769]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 10:02:22 compute-0 systemd[1]: Reloading.
Dec 05 10:02:23 compute-0 systemd-rc-local-generator[200800]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:02:23 compute-0 systemd-sysv-generator[200804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:02:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:23.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:23 compute-0 sudo[200767]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:23 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:23 compute-0 ceph-mon[74418]: pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:23 compute-0 sudo[200957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvfobmxcccnqetqqmcwsrnfaoavxywij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928943.493217-968-229927996554246/AnsiballZ_systemd.py'
Dec 05 10:02:23 compute-0 sudo[200957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:24 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:24.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:24 compute-0 python3.9[200959]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 10:02:24 compute-0 systemd[1]: Reloading.
Dec 05 10:02:24 compute-0 systemd-sysv-generator[200990]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:02:24 compute-0 systemd-rc-local-generator[200985]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:02:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:02:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:02:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:24 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:24 compute-0 sudo[200957]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:24 compute-0 sudo[201149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsfmflalvzyluemjclvkkksajsbikjjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928944.644861-968-243951001772048/AnsiballZ_systemd.py'
Dec 05 10:02:24 compute-0 sudo[201149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:25.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:25 compute-0 python3.9[201151]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 10:02:25 compute-0 systemd[1]: Reloading.
Dec 05 10:02:25 compute-0 systemd-sysv-generator[201183]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:02:25 compute-0 systemd-rc-local-generator[201180]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:02:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:25 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:25 compute-0 ceph-mon[74418]: pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:02:25 compute-0 sudo[201149]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:02:25] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Dec 05 10:02:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:02:25] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Dec 05 10:02:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:26 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b4001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:26 compute-0 sudo[201340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yedliuqjdjtzsvdmjfovdhzgxyaeokyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928945.727261-968-138911921813655/AnsiballZ_systemd.py'
Dec 05 10:02:26 compute-0 sudo[201340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:26.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:26 compute-0 python3.9[201342]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 10:02:26 compute-0 systemd[1]: Reloading.
Dec 05 10:02:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:26 compute-0 systemd-rc-local-generator[201373]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:02:26 compute-0 systemd-sysv-generator[201377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:02:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:26 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd080000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:26 compute-0 sudo[201340]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:02:27.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:02:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:27.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:27 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a40020b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:02:27
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['volumes', '.nfs', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', '.mgr']
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:02:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:02:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:02:27 compute-0 sudo[201532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neijrgaybwekrefpxswvcoenzmexdpms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928947.1666837-1055-42762710873942/AnsiballZ_systemd.py'
Dec 05 10:02:27 compute-0 sudo[201532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:02:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:02:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:28 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:28 compute-0 python3.9[201534]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:28.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:28 compute-0 systemd[1]: Reloading.
Dec 05 10:02:28 compute-0 ceph-mon[74418]: pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:02:28 compute-0 systemd-rc-local-generator[201564]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:02:28 compute-0 systemd-sysv-generator[201567]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:02:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:28 compute-0 sudo[201532]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:28 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:28 compute-0 sudo[201588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:02:28 compute-0 sudo[201588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:02:28 compute-0 sudo[201588]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:28 compute-0 podman[201626]: 2025-12-05 10:02:28.728145418 +0000 UTC m=+0.124059245 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:02:28 compute-0 sudo[201774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hapzigdzzrvhxsfnxnubdrgfrqxosskr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928948.6266599-1055-153082270467538/AnsiballZ_systemd.py'
Dec 05 10:02:28 compute-0 sudo[201774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:29.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:29 compute-0 ceph-mon[74418]: pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:29 compute-0 python3.9[201776]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:02:29 compute-0 systemd[1]: Reloading.
Dec 05 10:02:29 compute-0 systemd-sysv-generator[201807]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:02:29 compute-0 systemd-rc-local-generator[201803]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:02:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:29 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0800016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:29 compute-0 sudo[201774]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:30 compute-0 sudo[201964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmubgiggnpabowwtdunywqvvxckhygtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928949.7375448-1055-235066078352335/AnsiballZ_systemd.py'
Dec 05 10:02:30 compute-0 sudo[201964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:30 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a40020b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:30.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:30 compute-0 python3.9[201966]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:30 compute-0 systemd[1]: Reloading.
Dec 05 10:02:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:30 compute-0 systemd-rc-local-generator[201998]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:02:30 compute-0 systemd-sysv-generator[202001]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:02:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:30 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:30 compute-0 sudo[201964]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:31.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:31 compute-0 sudo[202156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmfejvbhyzffhmflmuklofqrvjnfdeen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928950.896225-1055-55057476460425/AnsiballZ_systemd.py'
Dec 05 10:02:31 compute-0 sudo[202156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:31 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:31 compute-0 python3.9[202158]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:31 compute-0 sudo[202156]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:31 compute-0 ceph-mon[74418]: pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:32 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0800016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:32.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:32 compute-0 sudo[202312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zviworwyelzaugwtbdpmwpmacliumavl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928951.9620104-1055-1570970958503/AnsiballZ_systemd.py'
Dec 05 10:02:32 compute-0 sudo[202312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:32 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:32 compute-0 python3.9[202315]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:32 compute-0 systemd[1]: Reloading.
Dec 05 10:02:32 compute-0 ceph-mon[74418]: pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:32 compute-0 systemd-sysv-generator[202349]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:02:32 compute-0 systemd-rc-local-generator[202344]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:02:33 compute-0 sudo[202312]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:33 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:33 compute-0 sudo[202503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usajhvcezojhvmdekevvreonodrrqtbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928953.406295-1163-224119486162527/AnsiballZ_systemd.py'
Dec 05 10:02:33 compute-0 sudo[202503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:34 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:34 compute-0 python3.9[202505]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 10:02:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:34.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:34 compute-0 systemd[1]: Reloading.
Dec 05 10:02:34 compute-0 systemd-sysv-generator[202541]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:02:34 compute-0 systemd-rc-local-generator[202537]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:02:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:02:34 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 05 10:02:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:02:34 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 05 10:02:34 compute-0 sudo[202503]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:34 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0800016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:35.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:35 compute-0 sudo[202698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfczdrxlkfhzmzczvlwfhtonncudkpby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928954.956787-1187-73040327319051/AnsiballZ_systemd.py'
Dec 05 10:02:35 compute-0 sudo[202698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:35 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:35 compute-0 ceph-mon[74418]: pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:02:35 compute-0 python3.9[202700]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:02:35] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 10:02:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:02:35] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 10:02:35 compute-0 sudo[202698]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:36 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b0004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:36.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:36 compute-0 sudo[202855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gswxyfadsqubdvpsoldvynsrhiizhjug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928956.0226276-1187-273627152314738/AnsiballZ_systemd.py'
Dec 05 10:02:36 compute-0 sudo[202855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:36 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:36 compute-0 python3.9[202857]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:36 compute-0 sudo[202855]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:02:37.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:02:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:37.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:37 compute-0 sudo[203010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pikjmhtxpnriixyqhulggtlbluwvkwkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928956.9201975-1187-251684450348448/AnsiballZ_systemd.py'
Dec 05 10:02:37 compute-0 sudo[203010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:37 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:37 compute-0 python3.9[203012]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:37 compute-0 sudo[203010]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:37 compute-0 ceph-mon[74418]: pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:38 compute-0 sudo[203165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skhsxuwuwhtsvttfgqtnvuyoxqrkzmop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928957.7386057-1187-164389728762220/AnsiballZ_systemd.py'
Dec 05 10:02:38 compute-0 sudo[203165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:38 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:38.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:38 compute-0 sudo[203169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:02:38 compute-0 sudo[203169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:02:38 compute-0 python3.9[203167]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:38 compute-0 sudo[203169]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:38 compute-0 sudo[203195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:02:38 compute-0 sudo[203195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:02:38 compute-0 sudo[203165]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:38 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00049a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:38 compute-0 sudo[203390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhhimtzedmzrnhzmbajztmspqpllqkya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928958.5331216-1187-102400267831455/AnsiballZ_systemd.py'
Dec 05 10:02:38 compute-0 sudo[203390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:38 compute-0 sudo[203195]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:38 compute-0 ceph-mon[74418]: pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:02:39 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:02:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:02:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:02:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:02:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:02:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:02:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:02:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:02:39 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:02:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:02:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:02:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:02:39 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:02:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:39.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:39 compute-0 sudo[203405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:02:39 compute-0 sudo[203405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:02:39 compute-0 sudo[203405]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:39 compute-0 python3.9[203392]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:39 compute-0 sudo[203430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:02:39 compute-0 sudo[203430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:02:39 compute-0 sudo[203390]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.257924) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928959258038, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 3676, "num_deletes": 502, "total_data_size": 7710099, "memory_usage": 7823512, "flush_reason": "Manual Compaction"}
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928959328747, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 4313550, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13325, "largest_seqno": 17000, "table_properties": {"data_size": 4302449, "index_size": 6251, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3781, "raw_key_size": 29931, "raw_average_key_size": 20, "raw_value_size": 4276166, "raw_average_value_size": 2862, "num_data_blocks": 271, "num_entries": 1494, "num_filter_entries": 1494, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928553, "oldest_key_time": 1764928553, "file_creation_time": 1764928959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 70896 microseconds, and 11430 cpu microseconds.
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.328820) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 4313550 bytes OK
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.328855) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.331149) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.331193) EVENT_LOG_v1 {"time_micros": 1764928959331186, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.331213) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 7694893, prev total WAL file size 7694893, number of live WAL files 2.
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.333342) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(4212KB)], [32(12MB)]
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928959333489, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 17782598, "oldest_snapshot_seqno": -1}
Dec 05 10:02:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:39 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5006 keys, 13091035 bytes, temperature: kUnknown
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928959571908, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 13091035, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13055980, "index_size": 21441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126077, "raw_average_key_size": 25, "raw_value_size": 12963382, "raw_average_value_size": 2589, "num_data_blocks": 896, "num_entries": 5006, "num_filter_entries": 5006, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764928959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.572196) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 13091035 bytes
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.585465) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.6 rd, 54.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.1, 12.8 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(7.2) write-amplify(3.0) OK, records in: 5839, records dropped: 833 output_compression: NoCompression
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.585515) EVENT_LOG_v1 {"time_micros": 1764928959585492, "job": 14, "event": "compaction_finished", "compaction_time_micros": 238507, "compaction_time_cpu_micros": 42470, "output_level": 6, "num_output_files": 1, "total_output_size": 13091035, "num_input_records": 5839, "num_output_records": 5006, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928959586405, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928959588739, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.333185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.588788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.588792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.588793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.588795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:02:39 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:02:39.588796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:02:39 compute-0 podman[203597]: 2025-12-05 10:02:39.598557118 +0000 UTC m=+0.103287590 container create 0c7ecf320c6887d915c4e98c40bd3d344e8b1cccad7e84f1d8fa76effa381a3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 10:02:39 compute-0 podman[203597]: 2025-12-05 10:02:39.52240512 +0000 UTC m=+0.027135622 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:02:39 compute-0 sudo[203661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcrzletfnlyyqskxohzamlusplazitkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928959.3493278-1187-135898867929032/AnsiballZ_systemd.py'
Dec 05 10:02:39 compute-0 sudo[203661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:39 compute-0 systemd[1]: Started libpod-conmon-0c7ecf320c6887d915c4e98c40bd3d344e8b1cccad7e84f1d8fa76effa381a3b.scope.
Dec 05 10:02:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:02:39 compute-0 podman[203597]: 2025-12-05 10:02:39.77342384 +0000 UTC m=+0.278154342 container init 0c7ecf320c6887d915c4e98c40bd3d344e8b1cccad7e84f1d8fa76effa381a3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_jang, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:02:39 compute-0 podman[203597]: 2025-12-05 10:02:39.779859486 +0000 UTC m=+0.284589978 container start 0c7ecf320c6887d915c4e98c40bd3d344e8b1cccad7e84f1d8fa76effa381a3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:02:39 compute-0 podman[203597]: 2025-12-05 10:02:39.783758342 +0000 UTC m=+0.288488834 container attach 0c7ecf320c6887d915c4e98c40bd3d344e8b1cccad7e84f1d8fa76effa381a3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_jang, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 10:02:39 compute-0 frosty_jang[203666]: 167 167
Dec 05 10:02:39 compute-0 systemd[1]: libpod-0c7ecf320c6887d915c4e98c40bd3d344e8b1cccad7e84f1d8fa76effa381a3b.scope: Deactivated successfully.
Dec 05 10:02:39 compute-0 podman[203597]: 2025-12-05 10:02:39.78626845 +0000 UTC m=+0.290998942 container died 0c7ecf320c6887d915c4e98c40bd3d344e8b1cccad7e84f1d8fa76effa381a3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:02:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ca31d2c09b3fd620e3597911fa0575e1837f0961a74ae215e6338e903aabf00-merged.mount: Deactivated successfully.
Dec 05 10:02:39 compute-0 python3.9[203663]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:40 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd080002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:40 compute-0 sudo[203661]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:40.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
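[editor's note] The recurring anonymous "HEAD / HTTP/1.0" entries in the beast access log above are consistent with a load-balancer health probe hitting the gateway about once per second. A minimal sketch of issuing the same kind of probe; the host and port are assumptions for illustration, since the log does not show which address this radosgw instance listens on:

    import socket

    # Illustrative only: send an anonymous "HEAD / HTTP/1.0" probe like the ones
    # recorded in the beast access log above. RGW_HOST/RGW_PORT are assumed values.
    RGW_HOST, RGW_PORT = "192.168.122.100", 8080

    with socket.create_connection((RGW_HOST, RGW_PORT), timeout=5) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
        status_line = s.recv(1024).split(b"\r\n", 1)[0]
        print(status_line.decode())  # an HTTP 200 status is expected while the gateway is up, as in the log
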
Dec 05 10:02:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:02:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:02:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:02:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:02:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:02:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:02:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:02:40 compute-0 podman[203597]: 2025-12-05 10:02:40.24362843 +0000 UTC m=+0.748358912 container remove 0c7ecf320c6887d915c4e98c40bd3d344e8b1cccad7e84f1d8fa76effa381a3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_jang, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 05 10:02:40 compute-0 systemd[1]: libpod-conmon-0c7ecf320c6887d915c4e98c40bd3d344e8b1cccad7e84f1d8fa76effa381a3b.scope: Deactivated successfully.
Dec 05 10:02:40 compute-0 podman[203795]: 2025-12-05 10:02:40.398800275 +0000 UTC m=+0.042475660 container create 5ac4c29b861e55eb8d25dbb5132e8244a70d5a0e41d0129eee7ac0fb80908bd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_montalcini, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 10:02:40 compute-0 systemd[1]: Started libpod-conmon-5ac4c29b861e55eb8d25dbb5132e8244a70d5a0e41d0129eee7ac0fb80908bd0.scope.
Dec 05 10:02:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b37e84fc33ced91dfe8e262d835c6b6e18b477855a4d1a6f2d635c2c660cd4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b37e84fc33ced91dfe8e262d835c6b6e18b477855a4d1a6f2d635c2c660cd4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b37e84fc33ced91dfe8e262d835c6b6e18b477855a4d1a6f2d635c2c660cd4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b37e84fc33ced91dfe8e262d835c6b6e18b477855a4d1a6f2d635c2c660cd4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b37e84fc33ced91dfe8e262d835c6b6e18b477855a4d1a6f2d635c2c660cd4c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:40 compute-0 podman[203795]: 2025-12-05 10:02:40.382011447 +0000 UTC m=+0.025686862 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:02:40 compute-0 podman[203795]: 2025-12-05 10:02:40.514359568 +0000 UTC m=+0.158034963 container init 5ac4c29b861e55eb8d25dbb5132e8244a70d5a0e41d0129eee7ac0fb80908bd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:02:40 compute-0 sudo[203863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkumwpyxalvizeozhnoidktqeulwjvpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928960.2189353-1187-21998386927410/AnsiballZ_systemd.py'
Dec 05 10:02:40 compute-0 auditd[702]: Audit daemon rotating log files
Dec 05 10:02:40 compute-0 podman[203795]: 2025-12-05 10:02:40.523132268 +0000 UTC m=+0.166807653 container start 5ac4c29b861e55eb8d25dbb5132e8244a70d5a0e41d0129eee7ac0fb80908bd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_montalcini, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 05 10:02:40 compute-0 sudo[203863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:40 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:40 compute-0 podman[203795]: 2025-12-05 10:02:40.532537885 +0000 UTC m=+0.176213270 container attach 5ac4c29b861e55eb8d25dbb5132e8244a70d5a0e41d0129eee7ac0fb80908bd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 05 10:02:40 compute-0 python3.9[203867]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:40 compute-0 focused_montalcini[203828]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:02:40 compute-0 focused_montalcini[203828]: --> All data devices are unavailable
Dec 05 10:02:40 compute-0 podman[203795]: 2025-12-05 10:02:40.867366591 +0000 UTC m=+0.511041986 container died 5ac4c29b861e55eb8d25dbb5132e8244a70d5a0e41d0129eee7ac0fb80908bd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:02:40 compute-0 systemd[1]: libpod-5ac4c29b861e55eb8d25dbb5132e8244a70d5a0e41d0129eee7ac0fb80908bd0.scope: Deactivated successfully.
Dec 05 10:02:40 compute-0 sudo[203863]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b37e84fc33ced91dfe8e262d835c6b6e18b477855a4d1a6f2d635c2c660cd4c-merged.mount: Deactivated successfully.
Dec 05 10:02:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:41.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:41 compute-0 podman[203795]: 2025-12-05 10:02:41.218690409 +0000 UTC m=+0.862365794 container remove 5ac4c29b861e55eb8d25dbb5132e8244a70d5a0e41d0129eee7ac0fb80908bd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:02:41 compute-0 systemd[1]: libpod-conmon-5ac4c29b861e55eb8d25dbb5132e8244a70d5a0e41d0129eee7ac0fb80908bd0.scope: Deactivated successfully.
Dec 05 10:02:41 compute-0 sudo[203430]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:41 compute-0 sudo[204016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:02:41 compute-0 sudo[204016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:02:41 compute-0 sudo[204016]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:41 compute-0 sudo[204070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quptagrgjnhtfhzxubtprogxavwsaqcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928961.0754817-1187-73967130592640/AnsiballZ_systemd.py'
Dec 05 10:02:41 compute-0 sudo[204070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:41 compute-0 ceph-mon[74418]: pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:41 compute-0 sudo[204071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:02:41 compute-0 sudo[204071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:02:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:41 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00049c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:41 compute-0 python3.9[204079]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:41 compute-0 sudo[204070]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:41 compute-0 podman[204143]: 2025-12-05 10:02:41.777726693 +0000 UTC m=+0.050779807 container create d3465b70ff3b0adda98d71271417dc09d3e457e440b356177f19dfab3e3146fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_spence, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:02:41 compute-0 systemd[1]: Started libpod-conmon-d3465b70ff3b0adda98d71271417dc09d3e457e440b356177f19dfab3e3146fb.scope.
Dec 05 10:02:41 compute-0 podman[204143]: 2025-12-05 10:02:41.750040457 +0000 UTC m=+0.023093601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:02:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:02:41 compute-0 podman[204143]: 2025-12-05 10:02:41.87766154 +0000 UTC m=+0.150714654 container init d3465b70ff3b0adda98d71271417dc09d3e457e440b356177f19dfab3e3146fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:02:41 compute-0 podman[204143]: 2025-12-05 10:02:41.887221901 +0000 UTC m=+0.160275015 container start d3465b70ff3b0adda98d71271417dc09d3e457e440b356177f19dfab3e3146fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_spence, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:02:41 compute-0 loving_spence[204172]: 167 167
Dec 05 10:02:41 compute-0 systemd[1]: libpod-d3465b70ff3b0adda98d71271417dc09d3e457e440b356177f19dfab3e3146fb.scope: Deactivated successfully.
Dec 05 10:02:41 compute-0 conmon[204172]: conmon d3465b70ff3b0adda98d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3465b70ff3b0adda98d71271417dc09d3e457e440b356177f19dfab3e3146fb.scope/container/memory.events
Dec 05 10:02:41 compute-0 podman[204143]: 2025-12-05 10:02:41.90736504 +0000 UTC m=+0.180418154 container attach d3465b70ff3b0adda98d71271417dc09d3e457e440b356177f19dfab3e3146fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 05 10:02:41 compute-0 podman[204143]: 2025-12-05 10:02:41.907767051 +0000 UTC m=+0.180820185 container died d3465b70ff3b0adda98d71271417dc09d3e457e440b356177f19dfab3e3146fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 10:02:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e83d529a4d58f2664628fef35d00c9ba1eed6affe966899294bef3432f47d47-merged.mount: Deactivated successfully.
Dec 05 10:02:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:42 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:42.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:42 compute-0 podman[204143]: 2025-12-05 10:02:42.202800442 +0000 UTC m=+0.475853566 container remove d3465b70ff3b0adda98d71271417dc09d3e457e440b356177f19dfab3e3146fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_spence, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:02:42 compute-0 sudo[204329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkkcivpneiuwvibtqlfrsghidraqmard ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928961.9156728-1187-236109458189214/AnsiballZ_systemd.py'
Dec 05 10:02:42 compute-0 sudo[204329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:42 compute-0 systemd[1]: libpod-conmon-d3465b70ff3b0adda98d71271417dc09d3e457e440b356177f19dfab3e3146fb.scope: Deactivated successfully.
Dec 05 10:02:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:42 compute-0 podman[204340]: 2025-12-05 10:02:42.391132991 +0000 UTC m=+0.024383436 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:02:42 compute-0 podman[204340]: 2025-12-05 10:02:42.517414227 +0000 UTC m=+0.150664642 container create 68cdfb2238d04b233f24670f0133393f3077ee4535ffed75857743d40f6ac359 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_robinson, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 10:02:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:42 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd080002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:42 compute-0 python3.9[204333]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:02:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:02:42 compute-0 systemd[1]: Started libpod-conmon-68cdfb2238d04b233f24670f0133393f3077ee4535ffed75857743d40f6ac359.scope.
Dec 05 10:02:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d79b440c1a0e077872fc9c380ca61fadb2864f6918449df70e20af139b1ca5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d79b440c1a0e077872fc9c380ca61fadb2864f6918449df70e20af139b1ca5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d79b440c1a0e077872fc9c380ca61fadb2864f6918449df70e20af139b1ca5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d79b440c1a0e077872fc9c380ca61fadb2864f6918449df70e20af139b1ca5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:42 compute-0 podman[204340]: 2025-12-05 10:02:42.62161399 +0000 UTC m=+0.254864415 container init 68cdfb2238d04b233f24670f0133393f3077ee4535ffed75857743d40f6ac359 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_robinson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:02:42 compute-0 podman[204340]: 2025-12-05 10:02:42.627877272 +0000 UTC m=+0.261127697 container start 68cdfb2238d04b233f24670f0133393f3077ee4535ffed75857743d40f6ac359 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 10:02:42 compute-0 podman[204340]: 2025-12-05 10:02:42.631438389 +0000 UTC m=+0.264688804 container attach 68cdfb2238d04b233f24670f0133393f3077ee4535ffed75857743d40f6ac359 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:02:42 compute-0 sudo[204329]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]: {
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:     "1": [
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:         {
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             "devices": [
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "/dev/loop3"
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             ],
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             "lv_name": "ceph_lv0",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             "lv_size": "21470642176",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             "name": "ceph_lv0",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             "tags": {
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.cluster_name": "ceph",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.crush_device_class": "",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.encrypted": "0",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.osd_id": "1",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.type": "block",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.vdo": "0",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:                 "ceph.with_tpm": "0"
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             },
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             "type": "block",
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:             "vg_name": "ceph_vg0"
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:         }
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]:     ]
Dec 05 10:02:42 compute-0 vibrant_robinson[204359]: }
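[editor's note] The JSON emitted by the vibrant_robinson container above is the payload of the "ceph-volume ... lvm list --format json" call launched via cephadm at 10:02:41. A minimal sketch of pulling the OSD id, logical volume and backing device out of that structure (field names are as they appear above; the input filename is hypothetical):

    import json

    # Illustrative only: parse the `ceph-volume lvm list --format json` output shown above.
    with open("lvm_list.json") as fh:   # hypothetical file holding the JSON block from the log
        report = json.load(fh)

    for osd_id, lvs in report.items():  # top-level keys are OSD ids ("1" in the output above)
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv_path={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")
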
Dec 05 10:02:42 compute-0 systemd[1]: libpod-68cdfb2238d04b233f24670f0133393f3077ee4535ffed75857743d40f6ac359.scope: Deactivated successfully.
Dec 05 10:02:42 compute-0 podman[204340]: 2025-12-05 10:02:42.937830359 +0000 UTC m=+0.571080774 container died 68cdfb2238d04b233f24670f0133393f3077ee4535ffed75857743d40f6ac359 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_robinson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:02:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-29d79b440c1a0e077872fc9c380ca61fadb2864f6918449df70e20af139b1ca5-merged.mount: Deactivated successfully.
Dec 05 10:02:42 compute-0 podman[204340]: 2025-12-05 10:02:42.988642776 +0000 UTC m=+0.621893181 container remove 68cdfb2238d04b233f24670f0133393f3077ee4535ffed75857743d40f6ac359 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec 05 10:02:43 compute-0 systemd[1]: libpod-conmon-68cdfb2238d04b233f24670f0133393f3077ee4535ffed75857743d40f6ac359.scope: Deactivated successfully.
Dec 05 10:02:43 compute-0 sudo[204071]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:43 compute-0 sudo[204530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uprzwtdyehbycqxojhcqudkzxiqmiogj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928962.7762227-1187-240847667877303/AnsiballZ_systemd.py'
Dec 05 10:02:43 compute-0 sudo[204530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:43.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:43 compute-0 sudo[204532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:02:43 compute-0 sudo[204532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:02:43 compute-0 sudo[204532]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:43 compute-0 sudo[204558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:02:43 compute-0 sudo[204558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:02:43 compute-0 python3.9[204536]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:43 compute-0 sudo[204530]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:43 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:43 compute-0 ceph-mon[74418]: pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:02:43 compute-0 podman[204651]: 2025-12-05 10:02:43.570298258 +0000 UTC m=+0.046998123 container create ab25670edfa8a18e4e132039fb0bf8115cb439ef77fd27dda5f1a4a7710f22b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_villani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:02:43 compute-0 systemd[1]: Started libpod-conmon-ab25670edfa8a18e4e132039fb0bf8115cb439ef77fd27dda5f1a4a7710f22b6.scope.
Dec 05 10:02:43 compute-0 podman[204651]: 2025-12-05 10:02:43.550352194 +0000 UTC m=+0.027052069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:02:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:02:43 compute-0 podman[204651]: 2025-12-05 10:02:43.665464795 +0000 UTC m=+0.142164660 container init ab25670edfa8a18e4e132039fb0bf8115cb439ef77fd27dda5f1a4a7710f22b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:02:43 compute-0 podman[204651]: 2025-12-05 10:02:43.672507857 +0000 UTC m=+0.149207712 container start ab25670edfa8a18e4e132039fb0bf8115cb439ef77fd27dda5f1a4a7710f22b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_villani, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:02:43 compute-0 podman[204651]: 2025-12-05 10:02:43.675277122 +0000 UTC m=+0.151976997 container attach ab25670edfa8a18e4e132039fb0bf8115cb439ef77fd27dda5f1a4a7710f22b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 05 10:02:43 compute-0 romantic_villani[204720]: 167 167
Dec 05 10:02:43 compute-0 systemd[1]: libpod-ab25670edfa8a18e4e132039fb0bf8115cb439ef77fd27dda5f1a4a7710f22b6.scope: Deactivated successfully.
Dec 05 10:02:43 compute-0 podman[204651]: 2025-12-05 10:02:43.677483893 +0000 UTC m=+0.154183748 container died ab25670edfa8a18e4e132039fb0bf8115cb439ef77fd27dda5f1a4a7710f22b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:02:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-315265d76e4678ff4950f10c8bcc371ff4538089d1515108946f546bf621ba5f-merged.mount: Deactivated successfully.
Dec 05 10:02:43 compute-0 podman[204651]: 2025-12-05 10:02:43.708485269 +0000 UTC m=+0.185185124 container remove ab25670edfa8a18e4e132039fb0bf8115cb439ef77fd27dda5f1a4a7710f22b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 10:02:43 compute-0 systemd[1]: libpod-conmon-ab25670edfa8a18e4e132039fb0bf8115cb439ef77fd27dda5f1a4a7710f22b6.scope: Deactivated successfully.
Dec 05 10:02:43 compute-0 sudo[204813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmstpsibsakigshkbnqdadarrvkfghze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928963.5706441-1187-122717611595808/AnsiballZ_systemd.py'
Dec 05 10:02:43 compute-0 sudo[204813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:43 compute-0 podman[204816]: 2025-12-05 10:02:43.868710891 +0000 UTC m=+0.047805516 container create c91ef7419bd1d2110c89091433401891f61256166cbe9969c7ca6942e69b0493 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 05 10:02:43 compute-0 systemd[1]: Started libpod-conmon-c91ef7419bd1d2110c89091433401891f61256166cbe9969c7ca6942e69b0493.scope.
Dec 05 10:02:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e29161bd1c9c1c3f203baa535f80cf357314451515a215709e57b3d2394af5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:43 compute-0 podman[204816]: 2025-12-05 10:02:43.845499058 +0000 UTC m=+0.024593683 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e29161bd1c9c1c3f203baa535f80cf357314451515a215709e57b3d2394af5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e29161bd1c9c1c3f203baa535f80cf357314451515a215709e57b3d2394af5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e29161bd1c9c1c3f203baa535f80cf357314451515a215709e57b3d2394af5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:02:43 compute-0 podman[204816]: 2025-12-05 10:02:43.961760191 +0000 UTC m=+0.140854816 container init c91ef7419bd1d2110c89091433401891f61256166cbe9969c7ca6942e69b0493 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:02:43 compute-0 podman[204816]: 2025-12-05 10:02:43.973594853 +0000 UTC m=+0.152689458 container start c91ef7419bd1d2110c89091433401891f61256166cbe9969c7ca6942e69b0493 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 10:02:43 compute-0 podman[204816]: 2025-12-05 10:02:43.990282969 +0000 UTC m=+0.169377574 container attach c91ef7419bd1d2110c89091433401891f61256166cbe9969c7ca6942e69b0493 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 05 10:02:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:44 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00049e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:44.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:44 compute-0 python3.9[204824]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:44 compute-0 sudo[204813]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:02:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:02:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:44 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:44 compute-0 lvm[205036]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:02:44 compute-0 lvm[205036]: VG ceph_vg0 finished
Dec 05 10:02:44 compute-0 sudo[205064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhwlfenngkshwnhequobdxrhgawvyvmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928964.4103835-1187-37923653952533/AnsiballZ_systemd.py'
Dec 05 10:02:44 compute-0 sudo[205064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:44 compute-0 recursing_poitras[204834]: {}
Dec 05 10:02:44 compute-0 systemd[1]: libpod-c91ef7419bd1d2110c89091433401891f61256166cbe9969c7ca6942e69b0493.scope: Deactivated successfully.
Dec 05 10:02:44 compute-0 systemd[1]: libpod-c91ef7419bd1d2110c89091433401891f61256166cbe9969c7ca6942e69b0493.scope: Consumed 1.206s CPU time.
Dec 05 10:02:44 compute-0 podman[204816]: 2025-12-05 10:02:44.732932414 +0000 UTC m=+0.912027019 container died c91ef7419bd1d2110c89091433401891f61256166cbe9969c7ca6942e69b0493 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 10:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e29161bd1c9c1c3f203baa535f80cf357314451515a215709e57b3d2394af5e-merged.mount: Deactivated successfully.
Dec 05 10:02:44 compute-0 podman[204816]: 2025-12-05 10:02:44.812449483 +0000 UTC m=+0.991544098 container remove c91ef7419bd1d2110c89091433401891f61256166cbe9969c7ca6942e69b0493 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:02:44 compute-0 systemd[1]: libpod-conmon-c91ef7419bd1d2110c89091433401891f61256166cbe9969c7ca6942e69b0493.scope: Deactivated successfully.
Dec 05 10:02:44 compute-0 sudo[204558]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:02:44 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:02:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:02:44 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:02:44 compute-0 sudo[205083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:02:44 compute-0 sudo[205083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:02:44 compute-0 sudo[205083]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:45 compute-0 python3.9[205068]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:45.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:45 compute-0 sudo[205064]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:45 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd080003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:45 compute-0 sudo[205260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcnqlltiohyxtncsrahdjvbmczzwuvoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928965.245816-1187-226532138411958/AnsiballZ_systemd.py'
Dec 05 10:02:45 compute-0 sudo[205260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:45 compute-0 ceph-mon[74418]: pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:02:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:02:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:02:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:02:45] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:02:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:02:45] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:02:45 compute-0 python3.9[205262]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:45 compute-0 sudo[205260]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:46 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:46.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:46 compute-0 sudo[205416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpvwcqsguooxpmhxvbnnkbbxguuhtfxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928966.0275536-1187-228934126388877/AnsiballZ_systemd.py'
Dec 05 10:02:46 compute-0 sudo[205416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:46 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00049e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:46 compute-0 python3.9[205418]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 10:02:46 compute-0 sudo[205416]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:02:47.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:02:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:47.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:47 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:47 compute-0 ceph-mon[74418]: pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:48 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd080003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:48.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:48 compute-0 sudo[205573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbiibiggdladbtofisnrqacpmduamnpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928967.9309251-1493-142284083165439/AnsiballZ_file.py'
Dec 05 10:02:48 compute-0 sudo[205573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:48 compute-0 python3.9[205575]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:02:48 compute-0 sudo[205573]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:48 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:48 compute-0 sudo[205654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:02:48 compute-0 sudo[205654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:02:48 compute-0 sudo[205654]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:48 compute-0 sudo[205751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqjdqptkevudiyhveplipzvnkehnjdja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928968.543594-1493-105310797367383/AnsiballZ_file.py'
Dec 05 10:02:48 compute-0 sudo[205751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:48 compute-0 python3.9[205753]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:02:49 compute-0 sudo[205751]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:49.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:02:49 compute-0 sudo[205903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybwlzrqmdgpxuheclnwevhmrqsdbvttf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928969.1602206-1493-215086560149558/AnsiballZ_file.py'
Dec 05 10:02:49 compute-0 sudo[205903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:49 compute-0 ceph-mon[74418]: pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:49 compute-0 python3.9[205905]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:02:49 compute-0 sudo[205903]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:50 compute-0 sudo[206055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whuvwzystikktdrelgskngawpgdnbpzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928969.8004422-1493-29153401673954/AnsiballZ_file.py'
Dec 05 10:02:50 compute-0 sudo[206055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:50 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00049e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:50.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:50 compute-0 python3.9[206057]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:02:50 compute-0 sudo[206055]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:50 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00049e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:50 compute-0 sudo[206209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpcazwstpsaffuoffoyzypzxaajnmmfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928970.4294226-1493-174123099783509/AnsiballZ_file.py'
Dec 05 10:02:50 compute-0 sudo[206209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:50 compute-0 ceph-mon[74418]: pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:50 compute-0 python3.9[206211]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:02:50 compute-0 sudo[206209]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:51.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:51 compute-0 sudo[206361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjklqzdagkdxrilpnfsjihjlfniikelf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928970.9995449-1493-59617519926875/AnsiballZ_file.py'
Dec 05 10:02:51 compute-0 sudo[206361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:51 compute-0 python3.9[206363]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:02:51 compute-0 sudo[206361]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:51 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:52 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:52.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:52 compute-0 sudo[206514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkccitamrnngmqoxshezeganpocjwacq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928971.841714-1622-106255588057290/AnsiballZ_stat.py'
Dec 05 10:02:52 compute-0 sudo[206514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:52 compute-0 python3.9[206516]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:02:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:52 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00049e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:52 compute-0 sudo[206514]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:52 compute-0 sudo[206647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajggzyiigdjglbhqxbngcxuejqnqpzqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928971.841714-1622-106255588057290/AnsiballZ_copy.py'
Dec 05 10:02:52 compute-0 sudo[206647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:53 compute-0 podman[206614]: 2025-12-05 10:02:53.029969621 +0000 UTC m=+0.087374435 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:02:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:53.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:53 compute-0 python3.9[206653]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764928971.841714-1622-106255588057290/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:02:53 compute-0 sudo[206647]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:53 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd080003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:53 compute-0 ceph-mon[74418]: pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:53 compute-0 sudo[206811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kodcnrgcsyzarnwjhaidityfkllhygak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928973.3568232-1622-197919585089861/AnsiballZ_stat.py'
Dec 05 10:02:53 compute-0 sudo[206811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:53 compute-0 python3.9[206813]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:02:53 compute-0 sudo[206811]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:54 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:54.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:54 compute-0 sudo[206937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvuoqxygoubrvrfdhoplksisacngdtxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928973.3568232-1622-197919585089861/AnsiballZ_copy.py'
Dec 05 10:02:54 compute-0 sudo[206937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:02:54 compute-0 python3.9[206939]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764928973.3568232-1622-197919585089861/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:02:54 compute-0 sudo[206937]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:02:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:54 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:54 compute-0 sudo[207090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwhiywlzlpqpfkpdtqjzonlrfvfddomh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928974.5738068-1622-48983439222131/AnsiballZ_stat.py'
Dec 05 10:02:54 compute-0 sudo[207090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:54 compute-0 python3.9[207092]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:02:55 compute-0 sudo[207090]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:55.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:55 compute-0 sudo[207215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bytojelrxwcmcepwfzhydyvirdbuczsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928974.5738068-1622-48983439222131/AnsiballZ_copy.py'
Dec 05 10:02:55 compute-0 sudo[207215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:55 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b00049e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:55 compute-0 python3.9[207217]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764928974.5738068-1622-48983439222131/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:02:55 compute-0 ceph-mon[74418]: pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:02:55 compute-0 sudo[207215]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:02:55] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:02:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:02:55] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:02:55 compute-0 sudo[207367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iluwojwxzmdffhxvvtazxaglpbqsckzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928975.7058613-1622-122225169980175/AnsiballZ_stat.py'
Dec 05 10:02:55 compute-0 sudo[207367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:56 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd080003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:56.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:56 compute-0 python3.9[207369]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:02:56 compute-0 sudo[207367]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:56 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd080003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:56 compute-0 sudo[207494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jefxeknccvcleyhnirdzwgpnyreoyhfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928975.7058613-1622-122225169980175/AnsiballZ_copy.py'
Dec 05 10:02:56 compute-0 sudo[207494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:56 compute-0 python3.9[207498]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764928975.7058613-1622-122225169980175/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:02:56 compute-0 sudo[207494]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:02:57.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:02:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:57.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:57 compute-0 sudo[207648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqaoffizpggovhrpcxuasjenjzpmaxja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928976.9146042-1622-173853772716973/AnsiballZ_stat.py'
Dec 05 10:02:57 compute-0 sudo[207648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:57 compute-0 python3.9[207650]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:02:57 compute-0 sudo[207648]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:57 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:02:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:02:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:02:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:02:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:02:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:02:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:02:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:02:57 compute-0 sudo[207773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctpcjbfkqzbifyrhjmbcosxmlqtmtxvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928976.9146042-1622-173853772716973/AnsiballZ_copy.py'
Dec 05 10:02:57 compute-0 sudo[207773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:57 compute-0 python3.9[207775]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764928976.9146042-1622-173853772716973/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:02:58 compute-0 sudo[207773]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:58 compute-0 ceph-mon[74418]: pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:58 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:02:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:02:58.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:02:58 compute-0 sudo[207927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnumvbtzoigrsyxtoruvsyecximbzazx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928978.1372447-1622-10388220600182/AnsiballZ_stat.py'
Dec 05 10:02:58 compute-0 sudo[207927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:58 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:58 compute-0 python3.9[207929]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:02:58 compute-0 sudo[207927]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:58 compute-0 sudo[208068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psgaogabhbqfhwuwxtlnlrwclglneylw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928978.1372447-1622-10388220600182/AnsiballZ_copy.py'
Dec 05 10:02:58 compute-0 sudo[208068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:59 compute-0 podman[208026]: 2025-12-05 10:02:59.004122132 +0000 UTC m=+0.107717870 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:02:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:02:59 compute-0 ceph-mon[74418]: pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:02:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:02:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:02:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:02:59.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:02:59 compute-0 python3.9[208076]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764928978.1372447-1622-10388220600182/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:02:59 compute-0 sudo[208068]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:02:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:02:59 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:02:59 compute-0 sudo[208231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spndmxajlyhhfjinsjdgfrmgeubcetxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928979.3498201-1622-68340342222767/AnsiballZ_stat.py'
Dec 05 10:02:59 compute-0 sudo[208231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:02:59 compute-0 python3.9[208233]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:02:59 compute-0 sudo[208231]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100259 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:03:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:00 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:00.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:00 compute-0 sudo[208355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukftimmprzfwdfpznvhtjahvjytksips ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928979.3498201-1622-68340342222767/AnsiballZ_copy.py'
Dec 05 10:03:00 compute-0 sudo[208355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:00 compute-0 python3.9[208357]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764928979.3498201-1622-68340342222767/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:00 compute-0 sudo[208355]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:03:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:00 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:00 compute-0 ceph-mon[74418]: pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:03:00 compute-0 sudo[208508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxhuoxldjebgnuhzjclddgjcpbbnlrqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928980.5001616-1622-109865760579620/AnsiballZ_stat.py'
Dec 05 10:03:00 compute-0 sudo[208508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:01 compute-0 python3.9[208510]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:01 compute-0 sudo[208508]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:01.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:01 compute-0 sudo[208633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ancdpirysphgmywkvmdnrcqkehhftzqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928980.5001616-1622-109865760579620/AnsiballZ_copy.py'
Dec 05 10:03:01 compute-0 sudo[208633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:01 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:01 compute-0 python3.9[208635]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764928980.5001616-1622-109865760579620/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:01 compute-0 sudo[208633]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:02 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:02.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:02 compute-0 sudo[208787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcihcakazfkcfmqqwmbhyjneciwdigle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928982.1529844-1961-135110050923486/AnsiballZ_command.py'
Dec 05 10:03:02 compute-0 sudo[208787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:03:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:02 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:02 compute-0 python3.9[208789]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 05 10:03:02 compute-0 sudo[208787]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:03.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:03 compute-0 sudo[208940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhldpauqzhfhsrlejyknsygwylnqfnoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928982.8916373-1988-230725232072175/AnsiballZ_file.py'
Dec 05 10:03:03 compute-0 sudo[208940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:03 compute-0 python3.9[208942]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:03 compute-0 sudo[208940]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:03 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:03 compute-0 sudo[209092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irvtpcisuchvycajagmmymrjntoktsol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928983.5612314-1988-71795426203575/AnsiballZ_file.py'
Dec 05 10:03:03 compute-0 sudo[209092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:04 compute-0 python3.9[209094]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:04 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:04 compute-0 sudo[209092]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:04.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:03:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:03:04 compute-0 sudo[209246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stlgxpihcfyatuxwnjjaxtrzllttttth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928984.2556496-1988-118178777717619/AnsiballZ_file.py'
Dec 05 10:03:04 compute-0 sudo[209246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:04 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:04 compute-0 python3.9[209248]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:04 compute-0 sudo[209246]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:05.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:05 compute-0 sudo[209398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quluuxjpntcaypnnnvlrfbdutrmzdcmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928984.9356668-1988-44246767724815/AnsiballZ_file.py'
Dec 05 10:03:05 compute-0 sudo[209398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:05 compute-0 ceph-mon[74418]: pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:03:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:05 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:05 compute-0 python3.9[209400]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:05 compute-0 sudo[209398]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:03:05] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:03:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:03:05] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:03:05 compute-0 sudo[209550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbsdocyaohegcdepjgmsqbsabhyjshkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928985.673253-1988-140123766394809/AnsiballZ_file.py'
Dec 05 10:03:05 compute-0 sudo[209550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:06 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:06.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:06 compute-0 python3.9[209552]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:06 compute-0 sudo[209550]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:06 compute-0 ceph-mon[74418]: pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:03:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:03:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:06 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:06 compute-0 sudo[209704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksamdtdgmiaxerydmxqxpnuiiffsyhpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928986.3351798-1988-167498226454506/AnsiballZ_file.py'
Dec 05 10:03:06 compute-0 sudo[209704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:06 compute-0 python3.9[209706]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:06 compute-0 sudo[209704]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:03:07.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:03:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:03:07.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:03:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:07.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:07 compute-0 sudo[209856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzegrcriprbdcnfwecfmmgcksrgvvjxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928986.9816976-1988-56378588794775/AnsiballZ_file.py'
Dec 05 10:03:07 compute-0 sudo[209856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:07 compute-0 ceph-mon[74418]: pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:03:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:07 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:07 compute-0 python3.9[209858]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:07 compute-0 sudo[209856]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:07 compute-0 sudo[210008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfsctyilejrnmvfyyzxuxipnaevwndlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928987.6849446-1988-60128058592460/AnsiballZ_file.py'
Dec 05 10:03:07 compute-0 sudo[210008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:08.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:08 compute-0 python3.9[210010]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:08 compute-0 sudo[210008]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:03:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:08 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:08 compute-0 sudo[210162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-staxpvkfrmilvlowiydnvlhwmqhgmnpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928988.2915578-1988-55640236870832/AnsiballZ_file.py'
Dec 05 10:03:08 compute-0 sudo[210162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:08 compute-0 sudo[210165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:03:08 compute-0 sudo[210165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:08 compute-0 sudo[210165]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:08 compute-0 python3.9[210164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:08 compute-0 ceph-mon[74418]: pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:03:08 compute-0 sudo[210162]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:09.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:09 compute-0 sudo[210339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fahnvyekmpwwyprxougbmzescrilnaje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928988.9213758-1988-28953759101882/AnsiballZ_file.py'
Dec 05 10:03:09 compute-0 sudo[210339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:03:09 compute-0 python3.9[210341]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:09 compute-0 sudo[210339]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:09 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:03:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:09 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:09 compute-0 sudo[210491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaogqwooogleinxztlptyjdoiumysryo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928989.5051339-1988-211600979803212/AnsiballZ_file.py'
Dec 05 10:03:09 compute-0 sudo[210491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:09 compute-0 python3.9[210493]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:09 compute-0 sudo[210491]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:10 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:10.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:10 compute-0 sudo[210644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxxomfqacwzmawehknhvdsqhdcdexiuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928990.0974782-1988-34054990506550/AnsiballZ_file.py'
Dec 05 10:03:10 compute-0 sudo[210644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:10 compute-0 python3.9[210647]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:10 compute-0 sudo[210644]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:10 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:10 compute-0 sudo[210797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvvilypussasdnjrzgrieokqfdlqkcdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928990.691873-1988-118706327164912/AnsiballZ_file.py'
Dec 05 10:03:10 compute-0 sudo[210797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:11.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:11 compute-0 python3.9[210799]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:11 compute-0 sudo[210797]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:11 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:11 compute-0 sudo[210949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odumdvgqqiqeboqpiduyzauznbdjdlci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928991.2942142-1988-256307059745604/AnsiballZ_file.py'
Dec 05 10:03:11 compute-0 sudo[210949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:11 compute-0 ceph-mon[74418]: pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:11 compute-0 python3.9[210951]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:11 compute-0 sudo[210949]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:12 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:12.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:12 compute-0 sudo[211102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovwqgxeactwqknknabqinxqtjcyzkgjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928992.035154-2285-11261876252038/AnsiballZ_stat.py'
Dec 05 10:03:12 compute-0 sudo[211102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:12 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:03:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:12 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:03:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:12 compute-0 python3.9[211104]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:12 compute-0 sudo[211102]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:03:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:03:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:12 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:12 compute-0 ceph-mon[74418]: pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:03:13 compute-0 sudo[211226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyvigscluqawlmgfatfaxmytqyrykfje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928992.035154-2285-11261876252038/AnsiballZ_copy.py'
Dec 05 10:03:13 compute-0 sudo[211226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:13.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:13 compute-0 python3.9[211228]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928992.035154-2285-11261876252038/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:13 compute-0 sudo[211226]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:13 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:13 compute-0 sudo[211378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrutwwesqdohhuyssutwhzuqlynschhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928993.4078958-2285-204660036855958/AnsiballZ_stat.py'
Dec 05 10:03:13 compute-0 sudo[211378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:13 compute-0 python3.9[211380]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:13 compute-0 sudo[211378]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:14 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:14.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:14 compute-0 sudo[211502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnivhwyxexfzwpuqjgprcmrgvaytfgzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928993.4078958-2285-204660036855958/AnsiballZ_copy.py'
Dec 05 10:03:14 compute-0 sudo[211502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:03:14 compute-0 python3.9[211504]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928993.4078958-2285-204660036855958/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:14 compute-0 sudo[211502]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:03:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:14 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:14 compute-0 sudo[211656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jycvnpdrjbsgcntjjthzwppxemjxchay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928994.5101287-2285-148677068441957/AnsiballZ_stat.py'
Dec 05 10:03:14 compute-0 sudo[211656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:14 compute-0 python3.9[211658]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:14 compute-0 sudo[211656]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:15.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:15 compute-0 sudo[211779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbnlkxftagozwzmirqpjardttqdseqhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928994.5101287-2285-148677068441957/AnsiballZ_copy.py'
Dec 05 10:03:15 compute-0 sudo[211779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:15 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:15 compute-0 python3.9[211781]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928994.5101287-2285-148677068441957/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:15 compute-0 sudo[211779]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:15 compute-0 ceph-mon[74418]: pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:03:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:03:15] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:03:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:03:15] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:03:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:15 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:03:15 compute-0 sudo[211931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uejlmfcpsbpajbylzjyrvgzbsdmuwkbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928995.671895-2285-127886541271608/AnsiballZ_stat.py'
Dec 05 10:03:15 compute-0 sudo[211931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:16 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:16 compute-0 python3.9[211933]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:16.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:16 compute-0 sudo[211931]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:16 compute-0 sudo[212056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkiwbhuxblhdxunfmvwcgxaoqunqgoyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928995.671895-2285-127886541271608/AnsiballZ_copy.py'
Dec 05 10:03:16 compute-0 sudo[212056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:03:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:16 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:16 compute-0 python3.9[212058]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928995.671895-2285-127886541271608/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:16 compute-0 sudo[212056]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:16 compute-0 ceph-mon[74418]: pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:03:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:03:17.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:03:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:03:17.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:03:17 compute-0 sudo[212208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkwdfeaxouehiigyvezzvgfffvopnvvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928996.8121808-2285-106491738539832/AnsiballZ_stat.py'
Dec 05 10:03:17 compute-0 sudo[212208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:17.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:17 compute-0 python3.9[212210]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:17 compute-0 sudo[212208]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:17 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:17 compute-0 sudo[212331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmzxyynonbcmbieikbfjnwgyxkunuvco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928996.8121808-2285-106491738539832/AnsiballZ_copy.py'
Dec 05 10:03:17 compute-0 sudo[212331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:17 compute-0 python3.9[212333]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928996.8121808-2285-106491738539832/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:17 compute-0 sudo[212331]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:18 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:18.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:18 compute-0 sudo[212484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmtbjiljlkorxrhfxqilyneexdtyghjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928997.9946449-2285-250307360494527/AnsiballZ_stat.py'
Dec 05 10:03:18 compute-0 sudo[212484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:18 compute-0 python3.9[212486]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:18 compute-0 sudo[212484]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:03:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:18 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:18 compute-0 sudo[212608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwvovyqiduifowwcjfztvktwvzbbidme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928997.9946449-2285-250307360494527/AnsiballZ_copy.py'
Dec 05 10:03:18 compute-0 sudo[212608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:18 compute-0 python3.9[212610]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928997.9946449-2285-250307360494527/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:19 compute-0 sudo[212608]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:19.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:03:19 compute-0 sudo[212760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moropjhqxnyeakwlsrkiwtaxliyhpsyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928999.133917-2285-210679586473384/AnsiballZ_stat.py'
Dec 05 10:03:19 compute-0 sudo[212760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:19 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:19 compute-0 python3.9[212762]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:19 compute-0 sudo[212760]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:19 compute-0 ceph-mon[74418]: pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:03:19 compute-0 sudo[212883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqwggeytzkovcabrxjygyjbdpgulrldo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928999.133917-2285-210679586473384/AnsiballZ_copy.py'
Dec 05 10:03:19 compute-0 sudo[212883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:20 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:20 compute-0 python3.9[212885]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764928999.133917-2285-210679586473384/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:20 compute-0 sudo[212883]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:20.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100320 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:03:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:03:20 compute-0 sudo[213037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hngajzvrmskbtnwqzgwukivazppvhkry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929000.2418783-2285-52030128114079/AnsiballZ_stat.py'
Dec 05 10:03:20 compute-0 sudo[213037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:20 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:03:20.557 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:03:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:03:20.557 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:03:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:03:20.558 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:03:20 compute-0 python3.9[213039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:20 compute-0 sudo[213037]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:21 compute-0 sudo[213160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pabllmqwlawrnkaezyfjnmimvxdchskc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929000.2418783-2285-52030128114079/AnsiballZ_copy.py'
Dec 05 10:03:21 compute-0 sudo[213160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:21 compute-0 ceph-mon[74418]: pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:03:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:21.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:21 compute-0 python3.9[213162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764929000.2418783-2285-52030128114079/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:21 compute-0 sudo[213160]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:21 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:21 compute-0 sudo[213312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xednyionkeenfcjzbueawkasjoqslrib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929001.3521729-2285-165084545431498/AnsiballZ_stat.py'
Dec 05 10:03:21 compute-0 sudo[213312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:21 compute-0 python3.9[213314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:21 compute-0 sudo[213312]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100321 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:03:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:22 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:22 compute-0 sudo[213435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irgwmvnczurjyeuocjqujmsqiirnsbyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929001.3521729-2285-165084545431498/AnsiballZ_copy.py'
Dec 05 10:03:22 compute-0 sudo[213435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:22.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:22 compute-0 python3.9[213437]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764929001.3521729-2285-165084545431498/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:22 compute-0 sudo[213435]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:03:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:22 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:22 compute-0 sudo[213589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jailjzjsuwxtveyqdpsguhjfllyyzvki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929002.430443-2285-258961264679880/AnsiballZ_stat.py'
Dec 05 10:03:22 compute-0 sudo[213589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:22 compute-0 python3.9[213591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:22 compute-0 sudo[213589]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:23.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:23 compute-0 sudo[213723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exzioehqmayeuwzvgjfbbfitsblvknlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929002.430443-2285-258961264679880/AnsiballZ_copy.py'
Dec 05 10:03:23 compute-0 sudo[213723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:23 compute-0 podman[213686]: 2025-12-05 10:03:23.314384724 +0000 UTC m=+0.091204009 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:03:23 compute-0 python3.9[213729]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764929002.430443-2285-258961264679880/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:23 compute-0 sudo[213723]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:23 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:23 compute-0 sudo[213884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raeqvxxkyddktaykibgdnwzasnfxsiet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929003.599229-2285-205529479761579/AnsiballZ_stat.py'
Dec 05 10:03:23 compute-0 sudo[213884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:23 compute-0 ceph-mon[74418]: pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:03:24 compute-0 python3.9[213886]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:24 compute-0 sudo[213884]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:24 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:24.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:03:24 compute-0 sudo[214009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xagfrfeicdrjjjjfxxoafvlqjgmvbixm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929003.599229-2285-205529479761579/AnsiballZ_copy.py'
Dec 05 10:03:24 compute-0 sudo[214009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:03:24 compute-0 python3.9[214011]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764929003.599229-2285-205529479761579/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:24 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:24 compute-0 sudo[214009]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:24 compute-0 sudo[214161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inklabsisxybcplfedzxaizgmrcgdygx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929004.6938086-2285-87621417857753/AnsiballZ_stat.py'
Dec 05 10:03:24 compute-0 sudo[214161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:25 compute-0 ceph-mon[74418]: pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:03:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:25.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:25 compute-0 python3.9[214163]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:25 compute-0 sudo[214161]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:25 compute-0 sudo[214284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-excjjeparkpdbhkodbljtceemrypsoip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929004.6938086-2285-87621417857753/AnsiballZ_copy.py'
Dec 05 10:03:25 compute-0 sudo[214284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:25 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:03:25] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:03:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:03:25] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:03:25 compute-0 python3.9[214286]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764929004.6938086-2285-87621417857753/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:25 compute-0 sudo[214284]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:03:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.7 total, 600.0 interval
                                           Cumulative writes: 3745 writes, 17K keys, 3745 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 3745 writes, 3745 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1390 writes, 5804 keys, 1390 commit groups, 1.0 writes per commit group, ingest: 11.14 MB, 0.02 MB/s
                                           Interval WAL: 1390 writes, 1390 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     29.0      0.88              0.17         7    0.126       0      0       0.0       0.0
                                             L6      1/0   12.48 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   2.9     68.0     58.0      1.30              0.35         6    0.216     28K   3183       0.0       0.0
                                            Sum      1/0   12.48 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.9     40.5     46.3      2.18              0.52        13    0.167     28K   3183       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.1     65.7     63.8      0.72              0.18         6    0.120     15K   1819       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     68.0     58.0      1.30              0.35         6    0.216     28K   3183       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     61.1      0.42              0.17         6    0.069       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.46              0.00         1    0.463       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.7 total, 600.0 interval
                                           Flush(GB): cumulative 0.025, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.08 MB/s write, 0.09 GB read, 0.07 MB/s read, 2.2 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5585d4f19350#2 capacity: 304.00 MB usage: 2.26 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(136,2.00 MB,0.65929%) FilterBlock(14,90.11 KB,0.0289465%) IndexBlock(14,174.42 KB,0.0560309%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 05 10:03:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:26 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:26.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:26 compute-0 sudo[214437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olstlzmgmeueggxkqrleihxhasujsfjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929005.870312-2285-104853007114155/AnsiballZ_stat.py'
Dec 05 10:03:26 compute-0 sudo[214437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:26 compute-0 python3.9[214439]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:26 compute-0 sudo[214437]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:26 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:26 compute-0 sudo[214561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cisijxiphmnjhivzeynlsaylaogzonnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929005.870312-2285-104853007114155/AnsiballZ_copy.py'
Dec 05 10:03:26 compute-0 sudo[214561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:26 compute-0 python3.9[214563]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764929005.870312-2285-104853007114155/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:26 compute-0 sudo[214561]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:03:27.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:03:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:03:27.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:03:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:27.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:27 compute-0 sudo[214713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spkmlcpxsswkimfhlrybyowktchrgoyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929007.019232-2285-91829992202118/AnsiballZ_stat.py'
Dec 05 10:03:27 compute-0 sudo[214713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:27 compute-0 python3.9[214715]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:27 compute-0 sudo[214713]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:03:27
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.log', 'images', '.nfs', 'cephfs.cephfs.data', 'volumes', 'vms', '.mgr']
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:03:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:27 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:03:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:03:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:03:27 compute-0 sudo[214836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niqojpfvfvhyukserjzljkblkpxabsbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929007.019232-2285-91829992202118/AnsiballZ_copy.py'
Dec 05 10:03:27 compute-0 sudo[214836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:27 compute-0 ceph-mon[74418]: pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:03:27 compute-0 python3.9[214838]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764929007.019232-2285-91829992202118/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:27 compute-0 sudo[214836]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:28 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:28.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:28 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:28 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:03:28 compute-0 sudo[214865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:03:28 compute-0 sudo[214865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:28 compute-0 sudo[214865]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:28 compute-0 ceph-mon[74418]: pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:03:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:29.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:03:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:03:29 compute-0 podman[214890]: 2025-12-05 10:03:29.414090722 +0000 UTC m=+0.081746853 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:03:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:29 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:29 compute-0 python3.9[215041]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:03:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:30 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:30.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:03:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:30 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:30 compute-0 sudo[215196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evdswqoflmkncxhnkktojlkbxtmicalx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929010.323281-2903-133867676928215/AnsiballZ_seboolean.py'
Dec 05 10:03:30 compute-0 sudo[215196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:30 compute-0 python3.9[215198]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 05 10:03:30 compute-0 ceph-mon[74418]: pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:03:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:31.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:31 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:31 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:03:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:31 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:03:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:32 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:32.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Dec 05 10:03:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:32 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:32 compute-0 sudo[215196]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:32 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:03:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:33.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:33 compute-0 ceph-mon[74418]: pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Dec 05 10:03:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:33 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:34 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:34.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:34 compute-0 sudo[215355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyqqgasommvohmaqpglapjhgfdibtxwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929013.9409232-2927-276446463401201/AnsiballZ_copy.py'
Dec 05 10:03:34 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 05 10:03:34 compute-0 sudo[215355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:03:34 compute-0 python3.9[215357]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:34 compute-0 sudo[215355]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:03:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:34 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:34 compute-0 ceph-mon[74418]: pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:03:34 compute-0 sudo[215508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imnvequacycdtykfrtyviqozulayuhow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929014.582054-2927-124853275761120/AnsiballZ_copy.py'
Dec 05 10:03:34 compute-0 sudo[215508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:35 compute-0 python3.9[215510]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:35 compute-0 sudo[215508]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:35.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:35 compute-0 sudo[215660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doicuwgivbztyoqcjdqrpxmrjfycotie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929015.2533028-2927-279696766879878/AnsiballZ_copy.py'
Dec 05 10:03:35 compute-0 sudo[215660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:35 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:03:35] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:03:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:03:35] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:03:35 compute-0 python3.9[215662]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:35 compute-0 sudo[215660]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:36 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:36 compute-0 sudo[215813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niqmgcxxpditmidilwlzelovyqudzbpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929015.8861253-2927-197758447977757/AnsiballZ_copy.py'
Dec 05 10:03:36 compute-0 sudo[215813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:36.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:36 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:03:36 compute-0 python3.9[215815]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:36 compute-0 sudo[215813]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:03:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:36 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:36 compute-0 sudo[215966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cintyrswhynwgikiiyktkrtlxdmfnaqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929016.4621007-2927-119453945259507/AnsiballZ_copy.py'
Dec 05 10:03:36 compute-0 sudo[215966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:36 compute-0 python3.9[215968]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:36 compute-0 sudo[215966]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:36 compute-0 ceph-mon[74418]: pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:03:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:03:37.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:03:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:37.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:37 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:37 compute-0 sudo[216118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abwmxsfoppqolfvljcjhqbpuwotcnzon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929017.2551355-3035-165184949746177/AnsiballZ_copy.py'
Dec 05 10:03:37 compute-0 sudo[216118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:37 compute-0 python3.9[216120]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:37 compute-0 sudo[216118]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:38 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:38 compute-0 sudo[216271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxwpuxcrwsrrxlrazgtpknsfhtogkpnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929017.8696625-3035-118956710821024/AnsiballZ_copy.py'
Dec 05 10:03:38 compute-0 sudo[216271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:38.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:38 compute-0 python3.9[216273]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:38 compute-0 sudo[216271]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:03:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:38 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:38 compute-0 sudo[216424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiadqltrcgecsjkswykfsnulhxieabkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929018.521794-3035-148714354949474/AnsiballZ_copy.py'
Dec 05 10:03:38 compute-0 sudo[216424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:38 compute-0 python3.9[216426]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:38 compute-0 sudo[216424]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:39 compute-0 ceph-mon[74418]: pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:03:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:39.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:03:39 compute-0 sudo[216576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twsyoqyeduzhsrwbmohgdwqrdbppxiov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929019.1088352-3035-87868141512336/AnsiballZ_copy.py'
Dec 05 10:03:39 compute-0 sudo[216576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:39 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:39 compute-0 python3.9[216578]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:39 compute-0 sudo[216576]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:40 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:40.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:40 compute-0 sudo[216729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phdqfdjxeaozvixyipmlnkkhoiaiwuvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929019.9221153-3035-144239980842182/AnsiballZ_copy.py'
Dec 05 10:03:40 compute-0 sudo[216729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:40 compute-0 python3.9[216731]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:40 compute-0 sudo[216729]: pam_unix(sudo:session): session closed for user root
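At this point the full QEMU TLS set is in place under /etc/pki/qemu/ (server-cert.pem, server-key.pem, client-cert.pem, client-key.pem, ca-cert.pem), all root:qemu 0640. A quick consistency check, assuming the usual layout and an RSA key, is to confirm that the server certificate chains to the staged CA and matches its private key:
# openssl verify -CAfile /etc/pki/qemu/ca-cert.pem /etc/pki/qemu/server-cert.pem
# openssl x509 -noout -modulus -in /etc/pki/qemu/server-cert.pem | openssl md5
# openssl rsa -noout -modulus -in /etc/pki/qemu/server-key.pem | openssl md5
The two digests should be identical.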
Dec 05 10:03:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 05 10:03:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:40 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:40 compute-0 ceph-mon[74418]: pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 05 10:03:40 compute-0 sudo[216882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loswgzztfphdjofridzdmlxagdgcazav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929020.5744734-3143-252720483015250/AnsiballZ_systemd.py'
Dec 05 10:03:40 compute-0 sudo[216882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:41 compute-0 python3.9[216884]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 10:03:41 compute-0 systemd[1]: Reloading.
Dec 05 10:03:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:41.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:41 compute-0 systemd-sysv-generator[216912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:03:41 compute-0 systemd-rc-local-generator[216909]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:03:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:41 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:41 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Dec 05 10:03:41 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Dec 05 10:03:41 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 05 10:03:41 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 05 10:03:41 compute-0 systemd[1]: Starting libvirt logging daemon...
Dec 05 10:03:41 compute-0 systemd[1]: Started libvirt logging daemon.
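The ansible.builtin.systemd task with daemon_reload=True and state=restarted corresponds to the manual sequence below; the same pattern repeats for each modular libvirt daemon restarted in the lines that follow:
# systemctl daemon-reload
# systemctl restart virtlogd.service
# systemctl is-active virtlogd.service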
Dec 05 10:03:42 compute-0 sudo[216882]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:42 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:42.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100342 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:03:42 compute-0 sudo[217078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggiyidicrawnlxsmxnagdeaxyvzlfweg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929022.164336-3143-48976554277104/AnsiballZ_systemd.py'
Dec 05 10:03:42 compute-0 sudo[217078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:03:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:03:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:03:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:42 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
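The "osd blocklist ls" dispatch above is the mgr periodically polling the cluster blocklist; the same query can be issued by hand from any node with admin credentials:
# ceph osd blocklist ls --format json
With the cluster healthy, as the pgmap lines here suggest, an empty list would be unsurprising.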
Dec 05 10:03:42 compute-0 python3.9[217080]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 10:03:42 compute-0 systemd[1]: Reloading.
Dec 05 10:03:42 compute-0 systemd-sysv-generator[217108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:03:42 compute-0 systemd-rc-local-generator[217101]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:03:43 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 05 10:03:43 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 05 10:03:43 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 05 10:03:43 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 05 10:03:43 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 05 10:03:43 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 05 10:03:43 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 05 10:03:43 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 05 10:03:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:43.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:43 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 05 10:03:43 compute-0 sudo[217078]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:43 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 05 10:03:43 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 05 10:03:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:43 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:43 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 05 10:03:43 compute-0 sudo[217302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqrekgnycoxuywkdpmhkwgryfuktyhkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929023.411956-3143-170834773254881/AnsiballZ_systemd.py'
Dec 05 10:03:43 compute-0 sudo[217302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:43 compute-0 ceph-mon[74418]: pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:03:43 compute-0 python3.9[217304]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 10:03:43 compute-0 systemd[1]: Reloading.
Dec 05 10:03:44 compute-0 systemd-sysv-generator[217338]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:03:44 compute-0 systemd-rc-local-generator[217335]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:03:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:44 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:44.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:03:44 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 05 10:03:44 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 05 10:03:44 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 05 10:03:44 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 05 10:03:44 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 05 10:03:44 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 05 10:03:44 compute-0 sudo[217302]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:03:44 compute-0 setroubleshoot[217117]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ab6aa288-568e-452d-b03d-1ae5f190a449
Dec 05 10:03:44 compute-0 setroubleshoot[217117]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 05 10:03:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:44 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd090002fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
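The virtlogd AVC above is informational at this stage: dac_read_search is typically requested when a root-owned daemon traverses directories whose permission bits deny it search access. Before generating the local module suggested by the catchall plugin, it is worth confirming the denial is still being emitted (a sketch built from the commands in the plugin output):
# ausearch -m avc -ts recent -c virtlogd
If the AVC keeps recurring after file ownership and permissions are corrected, the audit2allow/semodule steps printed above create a narrowly scoped local policy module.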
Dec 05 10:03:44 compute-0 sudo[217518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwatwjiajlhohcqyqxntbouxzgblmnst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929024.5583582-3143-6628471633071/AnsiballZ_systemd.py'
Dec 05 10:03:44 compute-0 sudo[217518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:44 compute-0 ceph-mon[74418]: pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:03:45 compute-0 python3.9[217520]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 10:03:45 compute-0 systemd[1]: Reloading.
Dec 05 10:03:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:45.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:45 compute-0 systemd-sysv-generator[217572]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:03:45 compute-0 systemd-rc-local-generator[217567]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:03:45 compute-0 sudo[217522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:03:45 compute-0 sudo[217522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:45 compute-0 sudo[217522]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:45 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Dec 05 10:03:45 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 05 10:03:45 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 05 10:03:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:45 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:45 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 05 10:03:45 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 05 10:03:45 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 05 10:03:45 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 05 10:03:45 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 05 10:03:45 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 05 10:03:45 compute-0 sudo[217582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 10:03:45 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 05 10:03:45 compute-0 sudo[217582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:45 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 05 10:03:45 compute-0 systemd[1]: Started libvirt QEMU daemon.
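With virtqemud up, the modular-daemon sockets can be verified directly; this is a sketch assuming the standard libvirt unit names:
# systemctl --no-pager status virtqemud.socket virtqemud-ro.socket virtqemud-admin.socket
# virsh -c qemu:///system uri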
Dec 05 10:03:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:03:45] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
Dec 05 10:03:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:03:45] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
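The mgr prometheus module is answering scrapes here; assuming the module's default port 9283 (the port itself is not shown in this log), the same endpoint can be fetched manually:
# curl -s http://192.168.122.100:9283/metrics | head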
Dec 05 10:03:45 compute-0 sudo[217518]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:46 compute-0 sudo[217853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byjpzuaprcxmlufdhjwmqecchfzysqgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929025.8177638-3143-72040182575797/AnsiballZ_systemd.py'
Dec 05 10:03:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:46 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:46 compute-0 sudo[217853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:46.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:46 compute-0 podman[217858]: 2025-12-05 10:03:46.217082947 +0000 UTC m=+0.097478131 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 05 10:03:46 compute-0 podman[217883]: 2025-12-05 10:03:46.409489686 +0000 UTC m=+0.072872219 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:03:46 compute-0 python3.9[217863]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 10:03:46 compute-0 systemd[1]: Reloading.
Dec 05 10:03:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:46 compute-0 systemd-rc-local-generator[217923]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:03:46 compute-0 systemd-sysv-generator[217926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:03:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:46 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:46 compute-0 podman[217858]: 2025-12-05 10:03:46.749384361 +0000 UTC m=+0.629779595 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:46.779275) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929026779410, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 777, "num_deletes": 251, "total_data_size": 1302236, "memory_usage": 1317280, "flush_reason": "Manual Compaction"}
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929026798005, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1283594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17001, "largest_seqno": 17777, "table_properties": {"data_size": 1279554, "index_size": 1820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8779, "raw_average_key_size": 19, "raw_value_size": 1271535, "raw_average_value_size": 2800, "num_data_blocks": 81, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928960, "oldest_key_time": 1764928960, "file_creation_time": 1764929026, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 18888 microseconds, and 6259 cpu microseconds.
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:46.798127) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1283594 bytes OK
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:46.798197) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:46.799883) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:46.800419) EVENT_LOG_v1 {"time_micros": 1764929026800409, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:46.800449) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1298394, prev total WAL file size 1299092, number of live WAL files 2.
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:46.801485) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1253KB)], [35(12MB)]
Dec 05 10:03:46 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929026801790, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 14374629, "oldest_snapshot_seqno": -1}
Dec 05 10:03:46 compute-0 ceph-mon[74418]: pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:03:47.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:03:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:03:47.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:03:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:03:47.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
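Both dashboard webhook receivers are failing at the TCP dial stage, which points at the peer API endpoints on compute-1 and compute-2 rather than at Alertmanager itself. A minimal reachability probe, assuming curl is available on the node:
# curl -m 5 -o /dev/null -s -w '%{http_code}\n' http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver
A connection timeout here would reproduce the dial error above.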
Dec 05 10:03:47 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Dec 05 10:03:47 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Dec 05 10:03:47 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 05 10:03:47 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 05 10:03:47 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 05 10:03:47 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 05 10:03:47 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 05 10:03:47 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 05 10:03:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:47 compute-0 sudo[217853]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:47.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4946 keys, 12090457 bytes, temperature: kUnknown
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929027267490, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12090457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12056469, "index_size": 20502, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 125445, "raw_average_key_size": 25, "raw_value_size": 11965572, "raw_average_value_size": 2419, "num_data_blocks": 853, "num_entries": 4946, "num_filter_entries": 4946, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764929026, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:47.268860) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12090457 bytes
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:47.274149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 30.8 rd, 25.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 12.5 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(20.6) write-amplify(9.4) OK, records in: 5460, records dropped: 514 output_compression: NoCompression
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:47.274202) EVENT_LOG_v1 {"time_micros": 1764929027274168, "job": 16, "event": "compaction_finished", "compaction_time_micros": 466733, "compaction_time_cpu_micros": 31978, "output_level": 6, "num_output_files": 1, "total_output_size": 12090457, "num_input_records": 5460, "num_output_records": 4946, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929027274854, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929027277587, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:46.801300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:47.277640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:47.277644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:47.277646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:47.277647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:03:47 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:03:47.277649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
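The flush_reason and compaction_reason fields in the rocksdb events above both read "Manual Compaction", i.e. this activity was requested rather than triggered by write volume; the monitor compacts its store on demand. The same flush-then-compact cycle can be provoked by hand, assuming admin access:
# ceph tell mon.compute-0 compact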
Dec 05 10:03:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:47 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:47 compute-0 sudo[218206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqktnsvympohtxiidezezdptztdnbvom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929027.46288-3254-191972809312426/AnsiballZ_file.py'
Dec 05 10:03:47 compute-0 sudo[218206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:47 compute-0 podman[218090]: 2025-12-05 10:03:47.834916533 +0000 UTC m=+0.364824746 container exec 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:03:47 compute-0 podman[218090]: 2025-12-05 10:03:47.886668646 +0000 UTC m=+0.416576839 container exec_died 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:03:47 compute-0 python3.9[218208]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:47 compute-0 sudo[218206]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:48 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:48.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:48 compute-0 sudo[218452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slzqwdfynxbimfhqsmneppqpxzbqwgzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929028.1841266-3278-116246897866943/AnsiballZ_find.py'
Dec 05 10:03:48 compute-0 sudo[218452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:48 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:48 compute-0 python3.9[218454]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 10:03:48 compute-0 sudo[218452]: pam_unix(sudo:session): session closed for user root
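The ansible.builtin.find invocation above (non-recursive, files only, pattern *.conf) behaves like:
# find /var/lib/openstack/config/ceph -maxdepth 1 -type f -name '*.conf'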
Dec 05 10:03:48 compute-0 sudo[218479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:03:48 compute-0 sudo[218479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:48 compute-0 sudo[218479]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:49.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:49 compute-0 sudo[218629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmssgxsgapjymeoevbnabsqvjihdkkuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929028.9590902-3302-271970851826360/AnsiballZ_command.py'
Dec 05 10:03:49 compute-0 sudo[218629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:49 compute-0 podman[218313]: 2025-12-05 10:03:49.268618216 +0000 UTC m=+1.113025053 container exec 8ab60eb67dd7aac53c686233e020897e2dfda89edd71f5c454cc0418d6c97a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:03:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:03:49 compute-0 python3.9[218631]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:03:49 compute-0 sudo[218629]: pam_unix(sudo:session): session closed for user root
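The piped command in the task above pulls the cluster fsid out of ceph.conf: awk splits on '=', matches the fsid line, and prints the value, and xargs trims the surrounding whitespace. Against a ceph.conf containing a line such as "fsid = 3c63ce0f-5206-59ae-8381-b67d0b6424b5" (the cluster id visible in the container names throughout this log), it prints the bare UUID:
# awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs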
Dec 05 10:03:49 compute-0 podman[218313]: 2025-12-05 10:03:49.52967789 +0000 UTC m=+1.374084697 container exec_died 8ab60eb67dd7aac53c686233e020897e2dfda89edd71f5c454cc0418d6c97a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:03:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:49 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:49 compute-0 ceph-mon[74418]: pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:50 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:50 compute-0 podman[218795]: 2025-12-05 10:03:50.160114033 +0000 UTC m=+0.176052326 container exec d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 10:03:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:50.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:50 compute-0 podman[218795]: 2025-12-05 10:03:50.184664363 +0000 UTC m=+0.200602636 container exec_died d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 10:03:50 compute-0 python3.9[218847]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 10:03:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:50 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08400bc80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:50 compute-0 podman[218925]: 2025-12-05 10:03:50.581830821 +0000 UTC m=+0.066763673 container exec f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., release=1793, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph)
Dec 05 10:03:50 compute-0 podman[218925]: 2025-12-05 10:03:50.641716764 +0000 UTC m=+0.126649616 container exec_died f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, com.redhat.component=keepalived-container, version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 10:03:50 compute-0 podman[219045]: 2025-12-05 10:03:50.867468145 +0000 UTC m=+0.055652800 container exec a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:03:50 compute-0 ceph-mon[74418]: pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:03:50 compute-0 podman[219045]: 2025-12-05 10:03:50.938687728 +0000 UTC m=+0.126872333 container exec_died a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:03:51 compute-0 python3.9[219154]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:51 compute-0 podman[219189]: 2025-12-05 10:03:51.166304519 +0000 UTC m=+0.066817303 container exec 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 10:03:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:51.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:51 compute-0 podman[219189]: 2025-12-05 10:03:51.428604387 +0000 UTC m=+0.329117161 container exec_died 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 10:03:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:51 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:51 compute-0 python3.9[219350]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764929030.7041132-3359-230626752688261/.source.xml follow=False _original_basename=secret.xml.j2 checksum=229c52d4619874d34015c842efe3483bc76f13d4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:51 compute-0 podman[219434]: 2025-12-05 10:03:51.821975551 +0000 UTC m=+0.054919800 container exec 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:03:51 compute-0 podman[219434]: 2025-12-05 10:03:51.888882367 +0000 UTC m=+0.121826696 container exec_died 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:03:51 compute-0 sudo[217582]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:03:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:03:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:03:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:03:52 compute-0 sudo[219516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:03:52 compute-0 sudo[219516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:52 compute-0 sudo[219516]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:52 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:52 compute-0 sudo[219568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:03:52 compute-0 sudo[219568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:52.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:52 compute-0 sudo[219666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yisngkchtbrpmzllhmjywajrnznukgpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929032.0276136-3404-211475481668545/AnsiballZ_command.py'
Dec 05 10:03:52 compute-0 sudo[219666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:03:52 compute-0 python3.9[219668]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 3c63ce0f-5206-59ae-8381-b67d0b6424b5
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:03:52 compute-0 polkitd[43445]: Registered Authentication Agent for unix-process:219686:380260 (system bus name :1.2848 [pkttyagent --process 219686 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 05 10:03:52 compute-0 polkitd[43445]: Unregistered Authentication Agent for unix-process:219686:380260 (system bus name :1.2848, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 05 10:03:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:52 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:52 compute-0 polkitd[43445]: Registered Authentication Agent for unix-process:219685:380259 (system bus name :1.2849 [pkttyagent --process 219685 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 05 10:03:52 compute-0 polkitd[43445]: Unregistered Authentication Agent for unix-process:219685:380259 (system bus name :1.2849, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 05 10:03:52 compute-0 sudo[219666]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:52 compute-0 sudo[219568]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:03:52 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:03:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:03:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:03:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:03:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:03:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:03:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:03:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:03:52 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:03:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:03:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:03:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:03:52 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:03:52 compute-0 sudo[219735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:03:52 compute-0 sudo[219735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:52 compute-0 sudo[219735]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:52 compute-0 sudo[219760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:03:52 compute-0 sudo[219760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:03:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:03:52 compute-0 ceph-mon[74418]: pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:03:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:03:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:03:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:03:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:03:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:03:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:03:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:03:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:53.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:53 compute-0 podman[219953]: 2025-12-05 10:03:53.456669727 +0000 UTC m=+0.058281692 container create a0ed6ae3aa3f4706382f54c7f0e1ef7cb414caaa33ca8e1329805e5f79dbbe3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hermann, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 10:03:53 compute-0 systemd[1]: Started libpod-conmon-a0ed6ae3aa3f4706382f54c7f0e1ef7cb414caaa33ca8e1329805e5f79dbbe3c.scope.
Dec 05 10:03:53 compute-0 podman[219953]: 2025-12-05 10:03:53.429403082 +0000 UTC m=+0.031015067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:03:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:03:53 compute-0 podman[219953]: 2025-12-05 10:03:53.547584468 +0000 UTC m=+0.149196453 container init a0ed6ae3aa3f4706382f54c7f0e1ef7cb414caaa33ca8e1329805e5f79dbbe3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hermann, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 10:03:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:53 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08400bc80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:53 compute-0 podman[219953]: 2025-12-05 10:03:53.555203296 +0000 UTC m=+0.156815241 container start a0ed6ae3aa3f4706382f54c7f0e1ef7cb414caaa33ca8e1329805e5f79dbbe3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:03:53 compute-0 podman[219953]: 2025-12-05 10:03:53.560698656 +0000 UTC m=+0.162310601 container attach a0ed6ae3aa3f4706382f54c7f0e1ef7cb414caaa33ca8e1329805e5f79dbbe3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:03:53 compute-0 distracted_hermann[219971]: 167 167
Dec 05 10:03:53 compute-0 systemd[1]: libpod-a0ed6ae3aa3f4706382f54c7f0e1ef7cb414caaa33ca8e1329805e5f79dbbe3c.scope: Deactivated successfully.
Dec 05 10:03:53 compute-0 podman[219953]: 2025-12-05 10:03:53.564583563 +0000 UTC m=+0.166195508 container died a0ed6ae3aa3f4706382f54c7f0e1ef7cb414caaa33ca8e1329805e5f79dbbe3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hermann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 05 10:03:53 compute-0 podman[219968]: 2025-12-05 10:03:53.584275531 +0000 UTC m=+0.075342948 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:03:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ea198195ca3c7e3f0670f8b33ac980495fe8a7646aca50884ce658cd9f311e3-merged.mount: Deactivated successfully.
Dec 05 10:03:53 compute-0 python3.9[219948]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:53 compute-0 podman[219953]: 2025-12-05 10:03:53.608801379 +0000 UTC m=+0.210413324 container remove a0ed6ae3aa3f4706382f54c7f0e1ef7cb414caaa33ca8e1329805e5f79dbbe3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hermann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:03:53 compute-0 systemd[1]: libpod-conmon-a0ed6ae3aa3f4706382f54c7f0e1ef7cb414caaa33ca8e1329805e5f79dbbe3c.scope: Deactivated successfully.
Dec 05 10:03:53 compute-0 podman[220040]: 2025-12-05 10:03:53.777773651 +0000 UTC m=+0.044852055 container create c517284d17b4942e3f7cdd50e0d70bd62a606465a464ee4f078c790990854f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hofstadter, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 05 10:03:53 compute-0 systemd[1]: Started libpod-conmon-c517284d17b4942e3f7cdd50e0d70bd62a606465a464ee4f078c790990854f5d.scope.
Dec 05 10:03:53 compute-0 podman[220040]: 2025-12-05 10:03:53.757518408 +0000 UTC m=+0.024596822 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:03:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5411dc451b19cbe639a75336e6616c4b261a67f70ba4bf70afc91198caa51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5411dc451b19cbe639a75336e6616c4b261a67f70ba4bf70afc91198caa51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5411dc451b19cbe639a75336e6616c4b261a67f70ba4bf70afc91198caa51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5411dc451b19cbe639a75336e6616c4b261a67f70ba4bf70afc91198caa51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5411dc451b19cbe639a75336e6616c4b261a67f70ba4bf70afc91198caa51/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:53 compute-0 podman[220040]: 2025-12-05 10:03:53.872207587 +0000 UTC m=+0.139286001 container init c517284d17b4942e3f7cdd50e0d70bd62a606465a464ee4f078c790990854f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:03:53 compute-0 podman[220040]: 2025-12-05 10:03:53.882012825 +0000 UTC m=+0.149091219 container start c517284d17b4942e3f7cdd50e0d70bd62a606465a464ee4f078c790990854f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hofstadter, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:03:53 compute-0 podman[220040]: 2025-12-05 10:03:53.885328296 +0000 UTC m=+0.152406680 container attach c517284d17b4942e3f7cdd50e0d70bd62a606465a464ee4f078c790990854f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:03:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:54 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:54.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:54 compute-0 sudo[220195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oybbosqvuehcyfjxdvzqsexbcawyqnyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929033.8482165-3452-199440123997813/AnsiballZ_command.py'
Dec 05 10:03:54 compute-0 sudo[220195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:54 compute-0 beautiful_hofstadter[220076]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:03:54 compute-0 beautiful_hofstadter[220076]: --> All data devices are unavailable
Dec 05 10:03:54 compute-0 systemd[1]: libpod-c517284d17b4942e3f7cdd50e0d70bd62a606465a464ee4f078c790990854f5d.scope: Deactivated successfully.
Dec 05 10:03:54 compute-0 podman[220040]: 2025-12-05 10:03:54.269425226 +0000 UTC m=+0.536503630 container died c517284d17b4942e3f7cdd50e0d70bd62a606465a464ee4f078c790990854f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 10:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9c5411dc451b19cbe639a75336e6616c4b261a67f70ba4bf70afc91198caa51-merged.mount: Deactivated successfully.
Dec 05 10:03:54 compute-0 podman[220040]: 2025-12-05 10:03:54.313688084 +0000 UTC m=+0.580766478 container remove c517284d17b4942e3f7cdd50e0d70bd62a606465a464ee4f078c790990854f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hofstadter, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:03:54 compute-0 systemd[1]: libpod-conmon-c517284d17b4942e3f7cdd50e0d70bd62a606465a464ee4f078c790990854f5d.scope: Deactivated successfully.
Dec 05 10:03:54 compute-0 sudo[219760]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:03:54 compute-0 sudo[220216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:03:54 compute-0 sudo[220216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:54 compute-0 sudo[220216]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:54 compute-0 sudo[220195]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:54 compute-0 sudo[220242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:03:54 compute-0 sudo[220242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:03:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:54 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:54 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 05 10:03:54 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.025s CPU time.
Dec 05 10:03:54 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 05 10:03:54 compute-0 sudo[220470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzzphfxmdgdccifamuanuehrdpeyring ; FSID=3c63ce0f-5206-59ae-8381-b67d0b6424b5 KEY=AQAKqTJpAAAAABAAnLxgItl+ZCeyHPuze9T3Cw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929034.6464615-3476-41464565513156/AnsiballZ_command.py'
Dec 05 10:03:54 compute-0 sudo[220470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:54 compute-0 podman[220443]: 2025-12-05 10:03:54.946929734 +0000 UTC m=+0.044467357 container create 57dd109bda25eeeffaa902d4fb7afc8282fcfa6a3e20967e49b00cb6fac17975 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_beaver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:03:55 compute-0 systemd[1]: Started libpod-conmon-57dd109bda25eeeffaa902d4fb7afc8282fcfa6a3e20967e49b00cb6fac17975.scope.
Dec 05 10:03:55 compute-0 podman[220443]: 2025-12-05 10:03:54.925823091 +0000 UTC m=+0.023360744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:03:55 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:03:55 compute-0 podman[220443]: 2025-12-05 10:03:55.042626069 +0000 UTC m=+0.140163712 container init 57dd109bda25eeeffaa902d4fb7afc8282fcfa6a3e20967e49b00cb6fac17975 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_beaver, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 10:03:55 compute-0 podman[220443]: 2025-12-05 10:03:55.051730175 +0000 UTC m=+0.149267798 container start 57dd109bda25eeeffaa902d4fb7afc8282fcfa6a3e20967e49b00cb6fac17975 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Dec 05 10:03:55 compute-0 podman[220443]: 2025-12-05 10:03:55.054870381 +0000 UTC m=+0.152408034 container attach 57dd109bda25eeeffaa902d4fb7afc8282fcfa6a3e20967e49b00cb6fac17975 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_beaver, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 10:03:55 compute-0 great_beaver[220478]: 167 167
Dec 05 10:03:55 compute-0 systemd[1]: libpod-57dd109bda25eeeffaa902d4fb7afc8282fcfa6a3e20967e49b00cb6fac17975.scope: Deactivated successfully.
Dec 05 10:03:55 compute-0 podman[220443]: 2025-12-05 10:03:55.057840502 +0000 UTC m=+0.155378125 container died 57dd109bda25eeeffaa902d4fb7afc8282fcfa6a3e20967e49b00cb6fac17975 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:03:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-26810ae1a17ad17dd37b8597e70ff504bc5846c61583ac98ad3e715058f45ee2-merged.mount: Deactivated successfully.
Dec 05 10:03:55 compute-0 podman[220443]: 2025-12-05 10:03:55.096367126 +0000 UTC m=+0.193904749 container remove 57dd109bda25eeeffaa902d4fb7afc8282fcfa6a3e20967e49b00cb6fac17975 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 10:03:55 compute-0 systemd[1]: libpod-conmon-57dd109bda25eeeffaa902d4fb7afc8282fcfa6a3e20967e49b00cb6fac17975.scope: Deactivated successfully.
Dec 05 10:03:55 compute-0 polkitd[43445]: Registered Authentication Agent for unix-process:220496:380521 (system bus name :1.2868 [pkttyagent --process 220496 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 05 10:03:55 compute-0 polkitd[43445]: Unregistered Authentication Agent for unix-process:220496:380521 (system bus name :1.2868, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 05 10:03:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:03:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:55.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:03:55 compute-0 sudo[220470]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:55 compute-0 podman[220507]: 2025-12-05 10:03:55.274453875 +0000 UTC m=+0.044075066 container create 362bea2c543dcf16b68b8b9540d96def5d3c1697fd1bf930765def29204a7b1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_roentgen, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:03:55 compute-0 systemd[1]: Started libpod-conmon-362bea2c543dcf16b68b8b9540d96def5d3c1697fd1bf930765def29204a7b1a.scope.
Dec 05 10:03:55 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00667b39084fb1c4a17f2d6b38674a3cf9e13e5c8ec5318cd7d4479e62f0348/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00667b39084fb1c4a17f2d6b38674a3cf9e13e5c8ec5318cd7d4479e62f0348/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00667b39084fb1c4a17f2d6b38674a3cf9e13e5c8ec5318cd7d4479e62f0348/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00667b39084fb1c4a17f2d6b38674a3cf9e13e5c8ec5318cd7d4479e62f0348/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:55 compute-0 podman[220507]: 2025-12-05 10:03:55.259732926 +0000 UTC m=+0.029354137 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:03:55 compute-0 podman[220507]: 2025-12-05 10:03:55.358953266 +0000 UTC m=+0.128574487 container init 362bea2c543dcf16b68b8b9540d96def5d3c1697fd1bf930765def29204a7b1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_roentgen, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 10:03:55 compute-0 podman[220507]: 2025-12-05 10:03:55.365470332 +0000 UTC m=+0.135091533 container start 362bea2c543dcf16b68b8b9540d96def5d3c1697fd1bf930765def29204a7b1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 10:03:55 compute-0 podman[220507]: 2025-12-05 10:03:55.368537795 +0000 UTC m=+0.138159006 container attach 362bea2c543dcf16b68b8b9540d96def5d3c1697fd1bf930765def29204a7b1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:03:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:55 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003bc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:55 compute-0 ceph-mon[74418]: pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]: {
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:     "1": [
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:         {
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             "devices": [
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "/dev/loop3"
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             ],
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             "lv_name": "ceph_lv0",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             "lv_size": "21470642176",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             "name": "ceph_lv0",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             "tags": {
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.cluster_name": "ceph",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.crush_device_class": "",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.encrypted": "0",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.osd_id": "1",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.type": "block",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.vdo": "0",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:                 "ceph.with_tpm": "0"
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             },
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             "type": "block",
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:             "vg_name": "ceph_vg0"
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:         }
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]:     ]
Dec 05 10:03:55 compute-0 trusting_roentgen[220546]: }
Dec 05 10:03:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:03:55] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
Dec 05 10:03:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:03:55] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
Dec 05 10:03:55 compute-0 systemd[1]: libpod-362bea2c543dcf16b68b8b9540d96def5d3c1697fd1bf930765def29204a7b1a.scope: Deactivated successfully.
Dec 05 10:03:55 compute-0 podman[220507]: 2025-12-05 10:03:55.670793651 +0000 UTC m=+0.440414862 container died 362bea2c543dcf16b68b8b9540d96def5d3c1697fd1bf930765def29204a7b1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_roentgen, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:03:55 compute-0 sudo[220680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grrruztzvtnhhljezfztgqaoytbxczop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929035.4190161-3500-165818545262303/AnsiballZ_copy.py'
Dec 05 10:03:55 compute-0 sudo[220680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a00667b39084fb1c4a17f2d6b38674a3cf9e13e5c8ec5318cd7d4479e62f0348-merged.mount: Deactivated successfully.
Dec 05 10:03:55 compute-0 podman[220507]: 2025-12-05 10:03:55.709269145 +0000 UTC m=+0.478890336 container remove 362bea2c543dcf16b68b8b9540d96def5d3c1697fd1bf930765def29204a7b1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_roentgen, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 10:03:55 compute-0 systemd[1]: libpod-conmon-362bea2c543dcf16b68b8b9540d96def5d3c1697fd1bf930765def29204a7b1a.scope: Deactivated successfully.
Dec 05 10:03:55 compute-0 sudo[220242]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:55 compute-0 sudo[220696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:03:55 compute-0 sudo[220696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:55 compute-0 sudo[220696]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:55 compute-0 sudo[220721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:03:55 compute-0 sudo[220721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
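The sudo record above shows cephadm driving "ceph-volume ... raw list --format json" inside the Ceph container to inventory raw devices for fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5. A minimal Python sketch of issuing the same inventory call from the host follows; it assumes a cephadm binary on PATH and deliberately does not assume the JSON schema of the result, which differs between Ceph releases.

#!/usr/bin/env python3
# Sketch only: re-run the disk inventory that cephadm is shown invoking above.
import json
import subprocess

FSID = "3c63ce0f-5206-59ae-8381-b67d0b6424b5"  # cluster fsid taken from the log line above

def raw_list(fsid: str):
    # cephadm wraps ceph-volume in the ceph container, mirroring the logged sudo command
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", fsid, "--",
         "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout or "{}")

if __name__ == "__main__":
    # Print the inventory verbatim; field layout varies by release, so no keys are assumed
    print(json.dumps(raw_list(FSID), indent=2))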
Dec 05 10:03:55 compute-0 python3.9[220684]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:55 compute-0 sudo[220680]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:56 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08400bc80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:56.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:56 compute-0 podman[220863]: 2025-12-05 10:03:56.250686895 +0000 UTC m=+0.044967360 container create 81c6bb0fd24e07aff2ca6ef94bde00253f0a681034f9db31b9a626bcae0e8018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 05 10:03:56 compute-0 systemd[1]: Started libpod-conmon-81c6bb0fd24e07aff2ca6ef94bde00253f0a681034f9db31b9a626bcae0e8018.scope.
Dec 05 10:03:56 compute-0 podman[220863]: 2025-12-05 10:03:56.22650133 +0000 UTC m=+0.020781865 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:03:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:03:56 compute-0 podman[220863]: 2025-12-05 10:03:56.343216634 +0000 UTC m=+0.137497139 container init 81c6bb0fd24e07aff2ca6ef94bde00253f0a681034f9db31b9a626bcae0e8018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:03:56 compute-0 podman[220863]: 2025-12-05 10:03:56.352217619 +0000 UTC m=+0.146498084 container start 81c6bb0fd24e07aff2ca6ef94bde00253f0a681034f9db31b9a626bcae0e8018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_zhukovsky, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:03:56 compute-0 podman[220863]: 2025-12-05 10:03:56.355183799 +0000 UTC m=+0.149464264 container attach 81c6bb0fd24e07aff2ca6ef94bde00253f0a681034f9db31b9a626bcae0e8018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:03:56 compute-0 clever_zhukovsky[220909]: 167 167
Dec 05 10:03:56 compute-0 systemd[1]: libpod-81c6bb0fd24e07aff2ca6ef94bde00253f0a681034f9db31b9a626bcae0e8018.scope: Deactivated successfully.
Dec 05 10:03:56 compute-0 podman[220863]: 2025-12-05 10:03:56.360207535 +0000 UTC m=+0.154488010 container died 81c6bb0fd24e07aff2ca6ef94bde00253f0a681034f9db31b9a626bcae0e8018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 05 10:03:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a8ca8a6d9d4fd27a39f98cff8bedbfaec75360e48def30dbbdd6e110b9a2359-merged.mount: Deactivated successfully.
Dec 05 10:03:56 compute-0 podman[220863]: 2025-12-05 10:03:56.391863774 +0000 UTC m=+0.186144249 container remove 81c6bb0fd24e07aff2ca6ef94bde00253f0a681034f9db31b9a626bcae0e8018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_zhukovsky, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 10:03:56 compute-0 sudo[220960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwqoctxbtugwpswuobbwrkbevkmbtqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929036.1288877-3524-255615927999564/AnsiballZ_stat.py'
Dec 05 10:03:56 compute-0 sudo[220960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:56 compute-0 systemd[1]: libpod-conmon-81c6bb0fd24e07aff2ca6ef94bde00253f0a681034f9db31b9a626bcae0e8018.scope: Deactivated successfully.
Dec 05 10:03:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:03:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:56 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:56 compute-0 podman[220979]: 2025-12-05 10:03:56.583019406 +0000 UTC m=+0.038564257 container create 004304fb0ad3a998b8455110e74017b226ac339ab0af2693756ff70ffc521c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldwasser, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 10:03:56 compute-0 systemd[1]: Started libpod-conmon-004304fb0ad3a998b8455110e74017b226ac339ab0af2693756ff70ffc521c98.scope.
Dec 05 10:03:56 compute-0 python3.9[220971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:56 compute-0 sudo[220960]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc069d0335b28ee631a2530834c9268ae607576e7cd5007f83dd087ba55a2b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc069d0335b28ee631a2530834c9268ae607576e7cd5007f83dd087ba55a2b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc069d0335b28ee631a2530834c9268ae607576e7cd5007f83dd087ba55a2b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc069d0335b28ee631a2530834c9268ae607576e7cd5007f83dd087ba55a2b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:03:56 compute-0 podman[220979]: 2025-12-05 10:03:56.565613605 +0000 UTC m=+0.021158476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:03:56 compute-0 podman[220979]: 2025-12-05 10:03:56.667879037 +0000 UTC m=+0.123423938 container init 004304fb0ad3a998b8455110e74017b226ac339ab0af2693756ff70ffc521c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 05 10:03:56 compute-0 podman[220979]: 2025-12-05 10:03:56.680834899 +0000 UTC m=+0.136379750 container start 004304fb0ad3a998b8455110e74017b226ac339ab0af2693756ff70ffc521c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 10:03:56 compute-0 podman[220979]: 2025-12-05 10:03:56.689573266 +0000 UTC m=+0.145118117 container attach 004304fb0ad3a998b8455110e74017b226ac339ab0af2693756ff70ffc521c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Dec 05 10:03:56 compute-0 sudo[221134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srilvltbbfhnmquclrukuxpsgewemexk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929036.1288877-3524-255615927999564/AnsiballZ_copy.py'
Dec 05 10:03:56 compute-0 sudo[221134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:03:57.030Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:03:57 compute-0 python3.9[221138]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764929036.1288877-3524-255615927999564/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:57 compute-0 sudo[221134]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:03:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:57.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:03:57 compute-0 lvm[221216]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:03:57 compute-0 lvm[221216]: VG ceph_vg0 finished
Dec 05 10:03:57 compute-0 sad_goldwasser[220995]: {}
Dec 05 10:03:57 compute-0 systemd[1]: libpod-004304fb0ad3a998b8455110e74017b226ac339ab0af2693756ff70ffc521c98.scope: Deactivated successfully.
Dec 05 10:03:57 compute-0 systemd[1]: libpod-004304fb0ad3a998b8455110e74017b226ac339ab0af2693756ff70ffc521c98.scope: Consumed 1.153s CPU time.
Dec 05 10:03:57 compute-0 podman[220979]: 2025-12-05 10:03:57.42291657 +0000 UTC m=+0.878461451 container died 004304fb0ad3a998b8455110e74017b226ac339ab0af2693756ff70ffc521c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 10:03:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfc069d0335b28ee631a2530834c9268ae607576e7cd5007f83dd087ba55a2b3-merged.mount: Deactivated successfully.
Dec 05 10:03:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:03:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:03:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:57 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4003b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:03:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:03:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:03:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:03:57 compute-0 ceph-mon[74418]: pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:03:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:03:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:03:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:03:57 compute-0 podman[220979]: 2025-12-05 10:03:57.643024778 +0000 UTC m=+1.098569629 container remove 004304fb0ad3a998b8455110e74017b226ac339ab0af2693756ff70ffc521c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldwasser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Dec 05 10:03:57 compute-0 sudo[220721]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:57 compute-0 systemd[1]: libpod-conmon-004304fb0ad3a998b8455110e74017b226ac339ab0af2693756ff70ffc521c98.scope: Deactivated successfully.
Dec 05 10:03:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:03:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:03:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:03:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:03:57 compute-0 sudo[221330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:03:57 compute-0 sudo[221330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:03:57 compute-0 sudo[221330]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:57 compute-0 sudo[221382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pueetiodyfzbciwylregtopnyaibfucf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929037.527419-3572-276230298084601/AnsiballZ_file.py'
Dec 05 10:03:57 compute-0 sudo[221382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:58 compute-0 python3.9[221384]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:58 compute-0 sudo[221382]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:58 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003bc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:03:58.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:03:58 compute-0 sudo[221536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eryiitvqgimpldislirsqeegkjftvvdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929038.2235985-3596-40153468528691/AnsiballZ_stat.py'
Dec 05 10:03:58 compute-0 sudo[221536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:58 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:58 compute-0 python3.9[221538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:03:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:03:58 compute-0 ceph-mon[74418]: pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:03:58 compute-0 sudo[221536]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:58 compute-0 sudo[221614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnbfocozvyjifhmrbrdmwjbspeywwjra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929038.2235985-3596-40153468528691/AnsiballZ_file.py'
Dec 05 10:03:58 compute-0 sudo[221614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:59 compute-0 python3.9[221616]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:03:59 compute-0 sudo[221614]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:03:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:03:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:03:59.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:03:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:03:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:03:59 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08400bc80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:03:59 compute-0 sudo[221781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqosiyfpoassnmloswqgqiahkfkyqola ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929039.4436922-3632-1355732175701/AnsiballZ_stat.py'
Dec 05 10:03:59 compute-0 sudo[221781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:03:59 compute-0 podman[221740]: 2025-12-05 10:03:59.75593446 +0000 UTC m=+0.091470902 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec 05 10:03:59 compute-0 python3.9[221788]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:03:59 compute-0 sudo[221781]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:04:00 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4004450 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:00 compute-0 sudo[221871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjoahfopvxfeaicjnxnwnbyafrhtzjmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929039.4436922-3632-1355732175701/AnsiballZ_file.py'
Dec 05 10:04:00 compute-0 sudo[221871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:00.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:00 compute-0 python3.9[221873]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.duumvik3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:00 compute-0 sudo[221871]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:04:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:04:00 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08c003bc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:00 compute-0 sudo[222024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugxaaxcfltcdkcesyaajrzaidljqigcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929040.5445547-3668-222344506940918/AnsiballZ_stat.py'
Dec 05 10:04:00 compute-0 sudo[222024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:00 compute-0 python3.9[222026]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:04:01 compute-0 sudo[222024]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:04:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:01.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:04:01 compute-0 sudo[222102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbhfrqopbznyuxbcvlnrruwpttffmlgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929040.5445547-3668-222344506940918/AnsiballZ_file.py'
Dec 05 10:04:01 compute-0 sudo[222102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:01 compute-0 python3.9[222104]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:01 compute-0 sudo[222102]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:04:01 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0b400a0c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:01 compute-0 ceph-mon[74418]: pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:04:02 compute-0 sudo[222254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqrjuzbosptqutqfficnlxdlcjocgjpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929041.7997837-3707-171781835182251/AnsiballZ_command.py'
Dec 05 10:04:02 compute-0 sudo[222254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:04:02 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd08400bc80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:04:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:02.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:04:02 compute-0 python3.9[222256]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:04:02 compute-0 sudo[222254]: pam_unix(sudo:session): session closed for user root
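The Ansible command task just logged captures the live ruleset with "nft -j list ruleset" before the EDPM firewall files are rendered. The sketch below repeats that capture with Python's subprocess module and lists the tables it finds; the only schema assumption is the top-level "nftables" array of the libnftables JSON output, and it needs root to read the ruleset.

#!/usr/bin/env python3
# Sketch only: dump the current nftables ruleset as JSON and list its tables.
import json
import subprocess

def list_ruleset():
    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True)
    # libnftables JSON puts every object under a single top-level "nftables" array
    return json.loads(out.stdout).get("nftables", [])

if __name__ == "__main__":
    for item in list_ruleset():
        table = item.get("table")
        if table:
            print(f"{table.get('family', '?')} {table.get('name', '?')}")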
Dec 05 10:04:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:04:02 compute-0 kernel: ganesha.nfsd[200374]: segfault at 50 ip 00007fd163a0c32e sp 00007fd1317f9210 error 4 in libntirpc.so.5.8[7fd1639f1000+2c000] likely on CPU 1 (core 0, socket 1)
Dec 05 10:04:02 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 05 10:04:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[168371]: 05/12/2025 10:04:02 : epoch 6932ad15 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0a4004450 fd 42 proxy ignored for local
Dec 05 10:04:02 compute-0 systemd[1]: Started Process Core Dump (PID 222314/UID 0).
Dec 05 10:04:02 compute-0 sudo[222411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvglftctaivjrggcqjzeucchvcvcelmy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764929042.5641992-3731-280257330239783/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 10:04:02 compute-0 sudo[222411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:03.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:03 compute-0 python3[222413]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 10:04:03 compute-0 sudo[222411]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:03 compute-0 ceph-mon[74418]: pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:04:03 compute-0 sudo[222563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjhszubcnkdmxljlwsospaesextxnynd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929043.470065-3755-212732271838092/AnsiballZ_stat.py'
Dec 05 10:04:03 compute-0 sudo[222563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:03 compute-0 systemd-coredump[222329]: Process 168375 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 61:
                                                    #0  0x00007fd163a0c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
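systemd-coredump has just recorded a core for ganesha.nfsd (PID 168375) following the libntirpc segfault logged at 10:04:02. A hedged follow-up sketch, assuming coredumpctl is installed alongside systemd-coredump, that pulls the metadata for that specific dump:

#!/usr/bin/env python3
# Sketch only: inspect the core that systemd-coredump reported above.
import subprocess

# PID 168375 comes from the "dumped core" journal entry; coredumpctl prints the
# signal, executable path and (with debuginfo installed) a symbolized stack trace.
subprocess.run(["coredumpctl", "info", "168375"], check=False)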
Dec 05 10:04:03 compute-0 python3.9[222565]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:04:04 compute-0 sudo[222563]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:04 compute-0 systemd[1]: systemd-coredump@4-222314-0.service: Deactivated successfully.
Dec 05 10:04:04 compute-0 systemd[1]: systemd-coredump@4-222314-0.service: Consumed 1.263s CPU time.
Dec 05 10:04:04 compute-0 podman[222572]: 2025-12-05 10:04:04.08276131 +0000 UTC m=+0.032094841 container died 8ab60eb67dd7aac53c686233e020897e2dfda89edd71f5c454cc0418d6c97a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c253964fcd43cec8e04a95a2ae86cb0a8aa88e82cbafd9dfa3864596e1e214e-merged.mount: Deactivated successfully.
Dec 05 10:04:04 compute-0 podman[222572]: 2025-12-05 10:04:04.120973157 +0000 UTC m=+0.070306618 container remove 8ab60eb67dd7aac53c686233e020897e2dfda89edd71f5c454cc0418d6c97a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:04:04 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Main process exited, code=exited, status=139/n/a
Dec 05 10:04:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:04.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:04 compute-0 sudo[222675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwpdgyhulbhswbtdbvkamvkvxggvktrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929043.470065-3755-212732271838092/AnsiballZ_file.py'
Dec 05 10:04:04 compute-0 sudo[222675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:04 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Failed with result 'exit-code'.
Dec 05 10:04:04 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 1.823s CPU time.
Dec 05 10:04:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:04:04 compute-0 python3.9[222678]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:04 compute-0 sudo[222675]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:04:05 compute-0 sudo[222842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qidsrfyxlbmfwcjdmpupxnhhfhvvtvzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929044.759198-3791-214971839781796/AnsiballZ_stat.py'
Dec 05 10:04:05 compute-0 sudo[222842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:04:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:05.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:04:05 compute-0 python3.9[222844]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:04:05 compute-0 sudo[222842]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:05 compute-0 sudo[222920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpfkuzrryfzzwbwsefsefmgvhrwndznu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929044.759198-3791-214971839781796/AnsiballZ_file.py'
Dec 05 10:04:05 compute-0 sudo[222920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:05 compute-0 ceph-mon[74418]: pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:04:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:04:05] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:04:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:04:05] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:04:05 compute-0 python3.9[222922]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:05 compute-0 sudo[222920]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:06.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:06 compute-0 sudo[223074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojgoaqxubzzjwxkrpgmtqefmoloxecuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929046.1027725-3827-184176135711328/AnsiballZ_stat.py'
Dec 05 10:04:06 compute-0 sudo[223074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:04:06 compute-0 python3.9[223076]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:04:06 compute-0 sudo[223074]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:06 compute-0 sudo[223152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaxlksmwoygmpefnksxahbscppixjhvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929046.1027725-3827-184176135711328/AnsiballZ_file.py'
Dec 05 10:04:06 compute-0 sudo[223152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:04:07.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:04:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:04:07.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:04:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:04:07.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:04:07 compute-0 python3.9[223154]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:07 compute-0 sudo[223152]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:07.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:07 compute-0 ceph-mon[74418]: pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:04:07 compute-0 sudo[223304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkojtntkhdzilaeksrgpibqjnommnglf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929047.3929932-3863-75198804480807/AnsiballZ_stat.py'
Dec 05 10:04:07 compute-0 sudo[223304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:07 compute-0 python3.9[223306]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:04:07 compute-0 sudo[223304]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:08 compute-0 sudo[223383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjlabqslnlgqmpquohejkghvcyyhizws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929047.3929932-3863-75198804480807/AnsiballZ_file.py'
Dec 05 10:04:08 compute-0 sudo[223383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:08.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:08 compute-0 python3.9[223385]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:08 compute-0 sudo[223383]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:04:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100408 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:04:09 compute-0 sudo[223509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:04:09 compute-0 sudo[223509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:04:09 compute-0 sudo[223509]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:09 compute-0 sudo[223560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwipbubmwxaoejresucqkaaeyjrmshgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929048.6372263-3899-20824717572181/AnsiballZ_stat.py'
Dec 05 10:04:09 compute-0 sudo[223560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:09.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:09 compute-0 python3.9[223563]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:04:09 compute-0 sudo[223560]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:04:09 compute-0 sudo[223686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyrcctdcjackgphxaboasrcszxcjjfaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929048.6372263-3899-20824717572181/AnsiballZ_copy.py'
Dec 05 10:04:09 compute-0 sudo[223686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:09 compute-0 ceph-mon[74418]: pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:04:09 compute-0 python3.9[223688]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764929048.6372263-3899-20824717572181/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:09 compute-0 sudo[223686]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:10.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:10 compute-0 sudo[223839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxseuuekubftttaroidchdlfqfyyprir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929050.0071595-3944-44889870159193/AnsiballZ_file.py'
Dec 05 10:04:10 compute-0 sudo[223839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:10 compute-0 python3.9[223841]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:10 compute-0 sudo[223839]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:04:10 compute-0 sudo[223992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpnqakwokqmpvylmamfopddklsfuhcny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929050.701133-3968-220773402234497/AnsiballZ_command.py'
Dec 05 10:04:10 compute-0 sudo[223992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:11 compute-0 python3.9[223994]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:04:11 compute-0 sudo[223992]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:11.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:11 compute-0 ceph-mon[74418]: pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:04:12 compute-0 sudo[224147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugfcwcdkbcdpooygdnzaawoxdlocmreq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929051.4797397-3992-180216289567366/AnsiballZ_blockinfile.py'
Dec 05 10:04:12 compute-0 sudo[224147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:04:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:12.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:04:12 compute-0 python3.9[224149]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:12 compute-0 sudo[224147]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:04:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:04:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:04:12 compute-0 ceph-mon[74418]: pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:04:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:04:12 compute-0 sudo[224301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyjjvinwbwzvvurxvsftmeubrtwkcwox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929052.5263577-4019-21156370156684/AnsiballZ_command.py'
Dec 05 10:04:12 compute-0 sudo[224301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:12 compute-0 python3.9[224303]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:04:13 compute-0 sudo[224301]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:13.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:13 compute-0 sudo[224454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlbamaqlalfqqczqgopjqdbgmnatkypy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929053.2201633-4043-250742707211034/AnsiballZ_stat.py'
Dec 05 10:04:13 compute-0 sudo[224454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:13 compute-0 python3.9[224456]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:04:13 compute-0 sudo[224454]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:14.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:14 compute-0 sudo[224609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbkbzvjpgkuainuklxlhombzifcsygso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929053.9357417-4067-250454990462407/AnsiballZ_command.py'
Dec 05 10:04:14 compute-0 sudo[224609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:14 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Scheduled restart job, restart counter is at 5.
Dec 05 10:04:14 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 10:04:14 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 1.823s CPU time.
Dec 05 10:04:14 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 10:04:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:04:14 compute-0 python3.9[224611]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:04:14 compute-0 sudo[224609]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:04:14 compute-0 podman[224686]: 2025-12-05 10:04:14.623584142 +0000 UTC m=+0.050206222 container create 35030d0766c4ac8c848462d73c3403b87a7518f0d5fc5abd6496a97acf4318c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 10:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac215173f0155d4f8368345719bb625565f388c4755fcbc8d90946437730257/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 05 10:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac215173f0155d4f8368345719bb625565f388c4755fcbc8d90946437730257/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac215173f0155d4f8368345719bb625565f388c4755fcbc8d90946437730257/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac215173f0155d4f8368345719bb625565f388c4755fcbc8d90946437730257/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hocvro-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:04:14 compute-0 podman[224686]: 2025-12-05 10:04:14.685999254 +0000 UTC m=+0.112621354 container init 35030d0766c4ac8c848462d73c3403b87a7518f0d5fc5abd6496a97acf4318c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:04:14 compute-0 podman[224686]: 2025-12-05 10:04:14.693320543 +0000 UTC m=+0.119942623 container start 35030d0766c4ac8c848462d73c3403b87a7518f0d5fc5abd6496a97acf4318c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 10:04:14 compute-0 bash[224686]: 35030d0766c4ac8c848462d73c3403b87a7518f0d5fc5abd6496a97acf4318c6
Dec 05 10:04:14 compute-0 podman[224686]: 2025-12-05 10:04:14.607314511 +0000 UTC m=+0.033936611 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:04:14 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 10:04:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 05 10:04:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 05 10:04:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 05 10:04:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 05 10:04:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 05 10:04:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 05 10:04:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 05 10:04:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:04:14 compute-0 sudo[224868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqfnavwrceeaqsmvkeifjyqrffmbjmaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929054.670298-4091-159870562045139/AnsiballZ_file.py'
Dec 05 10:04:14 compute-0 sudo[224868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:15 compute-0 python3.9[224870]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:15 compute-0 sudo[224868]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:04:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:15.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:04:15 compute-0 ceph-mon[74418]: pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:04:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:04:15] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:04:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:04:15] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:04:15 compute-0 sudo[225020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezezcpgcdjkfncerlkyryummybojedoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929055.3844-4115-242695520374499/AnsiballZ_stat.py'
Dec 05 10:04:15 compute-0 sudo[225020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:15 compute-0 python3.9[225022]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:04:15 compute-0 sudo[225020]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:16.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:16 compute-0 sudo[225144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-capijillfnrdtlsopusurbjlfisqhtcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929055.3844-4115-242695520374499/AnsiballZ_copy.py'
Dec 05 10:04:16 compute-0 sudo[225144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:16 compute-0 python3.9[225146]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764929055.3844-4115-242695520374499/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:16 compute-0 sudo[225144]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:04:16 compute-0 sudo[225297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jinxntxpnyjwtlluramjlznwylmxdzjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929056.6414587-4160-240737532208667/AnsiballZ_stat.py'
Dec 05 10:04:16 compute-0 sudo[225297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:04:17.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:04:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:04:17.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:04:17 compute-0 python3.9[225299]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:04:17 compute-0 sudo[225297]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:17.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:17 compute-0 sudo[225420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpafvbtqyqvwssjapjnefhexvkkgdqbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929056.6414587-4160-240737532208667/AnsiballZ_copy.py'
Dec 05 10:04:17 compute-0 sudo[225420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec 05 10:04:17 compute-0 ceph-mon[74418]: pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:04:17 compute-0 python3.9[225422]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764929056.6414587-4160-240737532208667/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:17 compute-0 sudo[225420]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:18 compute-0 sudo[225573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmzxiaktfnbkryzqfcqislqsvxricxsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929057.8855395-4205-60879812427511/AnsiballZ_stat.py'
Dec 05 10:04:18 compute-0 sudo[225573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:18.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:18 compute-0 python3.9[225575]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:04:18 compute-0 sudo[225573]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:04:18 compute-0 sudo[225697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obarhoohtustwmcgudfuvamfbvavcwwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929057.8855395-4205-60879812427511/AnsiballZ_copy.py'
Dec 05 10:04:18 compute-0 sudo[225697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:18 compute-0 python3.9[225699]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764929057.8855395-4205-60879812427511/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:18 compute-0 sudo[225697]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:19.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:04:19 compute-0 sudo[225849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwmfmosxdupzipgztemoebrahsyowihy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929059.245219-4250-155415985231519/AnsiballZ_systemd.py'
Dec 05 10:04:19 compute-0 sudo[225849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:19 compute-0 python3.9[225851]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:04:19 compute-0 systemd[1]: Reloading.
Dec 05 10:04:19 compute-0 ceph-mon[74418]: pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:04:19 compute-0 systemd-rc-local-generator[225876]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:04:19 compute-0 systemd-sysv-generator[225880]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:04:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:04:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:20.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:04:20 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Dec 05 10:04:20 compute-0 sudo[225849]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:04:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:04:20.558 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:04:20.559 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:04:20.560 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:04:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:04:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:04:20 compute-0 sudo[226041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mahjhsysgtyjeelwvxzruzbfznphvccz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929060.5987568-4274-241035593396594/AnsiballZ_systemd.py'
Dec 05 10:04:20 compute-0 sudo[226041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:20 compute-0 ceph-mon[74418]: pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:04:21 compute-0 python3.9[226043]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 05 10:04:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:21.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:21 compute-0 systemd[1]: Reloading.
Dec 05 10:04:21 compute-0 systemd-sysv-generator[226072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:04:21 compute-0 systemd-rc-local-generator[226068]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:04:21 compute-0 systemd[1]: Reloading.
Dec 05 10:04:22 compute-0 systemd-rc-local-generator[226110]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:04:22 compute-0 systemd-sysv-generator[226114]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:04:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:22 compute-0 sudo[226041]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:04:22 compute-0 sshd-session[165580]: Connection closed by 192.168.122.30 port 50944
Dec 05 10:04:22 compute-0 sshd-session[165577]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:04:22 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Dec 05 10:04:22 compute-0 systemd[1]: session-53.scope: Consumed 3min 30.002s CPU time.
Dec 05 10:04:22 compute-0 systemd-logind[789]: Session 53 logged out. Waiting for processes to exit.
Dec 05 10:04:22 compute-0 systemd-logind[789]: Removed session 53.
Dec 05 10:04:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:04:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:23.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:04:23 compute-0 ceph-mon[74418]: pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:04:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100423 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:04:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:04:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:04:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:04:24 compute-0 podman[226144]: 2025-12-05 10:04:24.397582672 +0000 UTC m=+0.068454088 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec 05 10:04:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Dec 05 10:04:24 compute-0 ceph-mon[74418]: pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Dec 05 10:04:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:25.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:04:25] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:04:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:04:25] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:04:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:26.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 05 10:04:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 10:04:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:04:27.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:04:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:04:27.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:04:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:27.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:04:27
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'vms', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'volumes']
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:04:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:04:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:04:27 compute-0 ceph-mon[74418]: pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Dec 05 10:04:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:04:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:04:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:04:27 compute-0 sshd-session[226179]: Accepted publickey for zuul from 192.168.122.30 port 51968 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 10:04:27 compute-0 systemd-logind[789]: New session 54 of user zuul.
Dec 05 10:04:27 compute-0 systemd[1]: Started Session 54 of User zuul.
Dec 05 10:04:27 compute-0 sshd-session[226179]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 10:04:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:04:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:28.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:04:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Dec 05 10:04:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:28 compute-0 python3.9[226337]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 10:04:29 compute-0 sudo[226342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:04:29 compute-0 sudo[226342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:04:29 compute-0 sudo[226342]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:29.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:04:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:29 compute-0 ceph-mon[74418]: pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Dec 05 10:04:30 compute-0 podman[226490]: 2025-12-05 10:04:30.027245257 +0000 UTC m=+0.093962178 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:04:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:30 compute-0 python3.9[226526]: ansible-ansible.builtin.service_facts Invoked
Dec 05 10:04:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:04:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:30.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:04:30 compute-0 network[226557]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 10:04:30 compute-0 network[226558]: 'network-scripts' will be removed from distribution in near future.
Dec 05 10:04:30 compute-0 network[226559]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 10:04:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:04:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100430 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:04:30 compute-0 ceph-mon[74418]: pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:04:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:04:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:31.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:04:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:32.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Dec 05 10:04:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:04:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:33.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:33 compute-0 ceph-mon[74418]: pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Dec 05 10:04:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:34 compute-0 sudo[226832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rudajrqcqdfwonsdcedscisryaeursnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929073.826662-101-15358363098242/AnsiballZ_setup.py'
Dec 05 10:04:34 compute-0 sudo[226832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb00016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:04:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:34.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:04:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:04:34 compute-0 python3.9[226835]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 10:04:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:04:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:34 compute-0 sudo[226832]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:35 compute-0 sudo[226918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgdzfuoxzglplahsttmjxdosnxqwekxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929073.826662-101-15358363098242/AnsiballZ_dnf.py'
Dec 05 10:04:35 compute-0 sudo[226918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:35.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:35 compute-0 python3.9[226920]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 10:04:35 compute-0 ceph-mon[74418]: pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:04:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:04:35] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 10:04:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:04:35] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 10:04:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:04:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:04:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:04:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:36.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:04:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:04:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb00016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:04:37.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:04:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:37.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:37 compute-0 ceph-mon[74418]: pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:04:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:38.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:04:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:04:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:39.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:04:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:39 compute-0 ceph-mon[74418]: pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:04:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:04:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:40.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:04:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:04:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:40 compute-0 ceph-mon[74418]: pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:04:40 compute-0 sudo[226918]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:41.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:41 compute-0 sudo[227077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyssjngqjmxmuavqluttajgjkerfakrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929081.1495335-137-94665971226997/AnsiballZ_stat.py'
Dec 05 10:04:41 compute-0 sudo[227077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:41 compute-0 python3.9[227079]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:04:41 compute-0 sudo[227077]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:42.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:42 compute-0 sudo[227231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puhzbqgrsgkusaudtifmdkhiovykasdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929082.1295624-167-149070293178760/AnsiballZ_command.py'
Dec 05 10:04:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:04:42 compute-0 sudo[227231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:04:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:04:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:04:42 compute-0 python3.9[227233]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:04:42 compute-0 sudo[227231]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:43.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:43 compute-0 ceph-mon[74418]: pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:04:43 compute-0 sudo[227384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvepbqdsxoorbztkztljuhjeqxltmvfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929083.4603312-197-207464277971322/AnsiballZ_stat.py'
Dec 05 10:04:43 compute-0 sudo[227384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:43 compute-0 python3.9[227386]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:04:43 compute-0 sudo[227384]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:04:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:44.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:04:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:04:44 compute-0 sudo[227538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olhuoouzrvyuuclctouywxqdsmlmjtrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929084.2724423-221-38217130829503/AnsiballZ_command.py'
Dec 05 10:04:44 compute-0 sudo[227538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:04:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:44 compute-0 python3.9[227540]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:04:44 compute-0 sudo[227538]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:45 compute-0 sudo[227691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avkppxzdribrhdjledzxobzhduzqsaef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929084.9068372-245-263428843981631/AnsiballZ_stat.py'
Dec 05 10:04:45 compute-0 sudo[227691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:04:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:45.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:04:45 compute-0 python3.9[227693]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:04:45 compute-0 sudo[227691]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:04:45] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:04:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:04:45] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:04:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:45 compute-0 ceph-mon[74418]: pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:04:45 compute-0 sudo[227814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yapabyjzpimrsulqjhrtkhdzjmosyobl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929084.9068372-245-263428843981631/AnsiballZ_copy.py'
Dec 05 10:04:45 compute-0 sudo[227814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100446 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:04:46 compute-0 python3.9[227816]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764929084.9068372-245-263428843981631/.source.iscsi _original_basename=.g9x7tcxb follow=False checksum=d095d713ba684750695e59d76de9264c5cd15997 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:46 compute-0 sudo[227814]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:46.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:04:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:46 compute-0 sudo[227968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekqkqlskffsdowcwikrmrxlztqrsgafn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929086.308134-290-266273475863510/AnsiballZ_file.py'
Dec 05 10:04:46 compute-0 sudo[227968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:46 compute-0 python3.9[227970]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:46 compute-0 sudo[227968]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:04:47.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:04:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:04:47.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:04:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:47.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:47 compute-0 sudo[228120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpmwcqlwehszkyugtcxvkypswiorrudr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929087.1781726-314-266363974392228/AnsiballZ_lineinfile.py'
Dec 05 10:04:47 compute-0 sudo[228120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:47 compute-0 ceph-mon[74418]: pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:04:47 compute-0 python3.9[228122]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:47 compute-0 sudo[228120]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:48.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:04:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:48 compute-0 sudo[228274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymwrjyqplvescvudmrucpmcqumskfful ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929088.146896-341-84618263210178/AnsiballZ_systemd_service.py'
Dec 05 10:04:48 compute-0 sudo[228274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:48 compute-0 ceph-mon[74418]: pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:04:49 compute-0 python3.9[228276]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:04:49 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 05 10:04:49 compute-0 sudo[228274]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:49 compute-0 sudo[228279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:04:49 compute-0 sudo[228279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:04:49 compute-0 sudo[228279]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:49.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:04:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:49 compute-0 sudo[228455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tujgcxaxmabzysoztflivmrltoitrqra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929089.373343-365-280154357061446/AnsiballZ_systemd_service.py'
Dec 05 10:04:49 compute-0 sudo[228455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:49 compute-0 python3.9[228457]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:04:50 compute-0 systemd[1]: Reloading.
Dec 05 10:04:50 compute-0 systemd-rc-local-generator[228487]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:04:50 compute-0 systemd-sysv-generator[228490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:04:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:50.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:50 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 05 10:04:50 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 05 10:04:50 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec 05 10:04:50 compute-0 systemd[1]: Started Open-iSCSI.
Dec 05 10:04:50 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 05 10:04:50 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 05 10:04:50 compute-0 sudo[228455]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:04:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:51.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:51 compute-0 sudo[228659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmnwujxlvbaizbhiigeobxfxgzytpflz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929091.010645-398-113679701356220/AnsiballZ_service_facts.py'
Dec 05 10:04:51 compute-0 sudo[228659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:51 compute-0 python3.9[228661]: ansible-ansible.builtin.service_facts Invoked
Dec 05 10:04:51 compute-0 network[228678]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 10:04:51 compute-0 network[228679]: 'network-scripts' will be removed from distribution in near future.
Dec 05 10:04:51 compute-0 network[228680]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 10:04:51 compute-0 ceph-mon[74418]: pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:04:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:52.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:04:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:53 compute-0 ceph-mon[74418]: pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:04:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:53.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:54.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:04:54 compute-0 podman[228764]: 2025-12-05 10:04:54.534094464 +0000 UTC m=+0.070536865 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:04:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:04:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.003000080s ======
Dec 05 10:04:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:55.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec 05 10:04:55 compute-0 sudo[228659]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:04:55] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:04:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:04:55] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:04:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:55 compute-0 ceph-mon[74418]: pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:04:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:56.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:56 compute-0 sudo[228975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkihutxzkwqncgpwouokwsxrsfxcsgmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929095.720142-428-14010101926966/AnsiballZ_file.py'
Dec 05 10:04:56 compute-0 sudo[228975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:56 compute-0 python3.9[228977]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 10:04:56 compute-0 sudo[228975]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:04:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:04:57.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:04:57 compute-0 sudo[229128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfelkjcfjegriizorshluqqdszipwevg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929096.6488574-452-109663076803745/AnsiballZ_modprobe.py'
Dec 05 10:04:57 compute-0 sudo[229128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:57 compute-0 python3.9[229130]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 05 10:04:57 compute-0 sudo[229128]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:57.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:04:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:04:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:04:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:04:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:04:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:04:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:04:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:04:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:57 compute-0 ceph-mon[74418]: pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:04:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:04:57 compute-0 sudo[229284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtzgyqrqtpikeolkppgjnuzygxxmxnwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929097.497372-476-173199239383120/AnsiballZ_stat.py'
Dec 05 10:04:57 compute-0 sudo[229284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:58 compute-0 sudo[229287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:04:58 compute-0 python3.9[229286]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:04:58 compute-0 sudo[229287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:04:58 compute-0 sudo[229287]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:58 compute-0 sudo[229284]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:58 compute-0 sudo[229312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 05 10:04:58 compute-0 sudo[229312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:04:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:04:58.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 10:04:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:04:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 10:04:58 compute-0 sudo[229479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsxuctmdljmqvlckjbvmirytdndjxetg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929097.497372-476-173199239383120/AnsiballZ_copy.py'
Dec 05 10:04:58 compute-0 sudo[229479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:04:58 compute-0 sudo[229312]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:04:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:04:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:04:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:04:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:04:58 compute-0 sudo[229482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:04:58 compute-0 sudo[229482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:04:58 compute-0 sudo[229482]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:58 compute-0 sudo[229507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:04:58 compute-0 sudo[229507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:04:58 compute-0 python3.9[229481]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764929097.497372-476-173199239383120/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:58 compute-0 sudo[229479]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:59 compute-0 sudo[229507]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:04:59 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:04:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:04:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:04:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:04:59 compute-0 sudo[229714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcvyncuifcdlqukpzhahysicuhqmfuyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929098.9174247-524-42189435569584/AnsiballZ_lineinfile.py'
Dec 05 10:04:59 compute-0 sudo[229714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:04:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:04:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:04:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:04:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:04:59 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:04:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:04:59 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:04:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:04:59 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:04:59 compute-0 sudo[229717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:04:59 compute-0 sudo[229717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:04:59 compute-0 sudo[229717]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:59 compute-0 sudo[229742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:04:59 compute-0 sudo[229742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:04:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:04:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:04:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:04:59.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:04:59 compute-0 python3.9[229716]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:04:59 compute-0 sudo[229714]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:04:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:04:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:04:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:04:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:04:59 compute-0 ceph-mon[74418]: pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:04:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:04:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:04:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:04:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:04:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:04:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:04:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:04:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:04:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:04:59 compute-0 podman[229868]: 2025-12-05 10:04:59.710194131 +0000 UTC m=+0.040842578 container create 92b6df567ac2f7edf19bc77aee3944b290b0067f7d513e2130e93817f7477ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:04:59 compute-0 systemd[1]: Started libpod-conmon-92b6df567ac2f7edf19bc77aee3944b290b0067f7d513e2130e93817f7477ef0.scope.
Dec 05 10:04:59 compute-0 podman[229868]: 2025-12-05 10:04:59.690569829 +0000 UTC m=+0.021218256 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:04:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:04:59 compute-0 podman[229868]: 2025-12-05 10:04:59.811066136 +0000 UTC m=+0.141714633 container init 92b6df567ac2f7edf19bc77aee3944b290b0067f7d513e2130e93817f7477ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_keller, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Dec 05 10:04:59 compute-0 podman[229868]: 2025-12-05 10:04:59.819398202 +0000 UTC m=+0.150046619 container start 92b6df567ac2f7edf19bc77aee3944b290b0067f7d513e2130e93817f7477ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 05 10:04:59 compute-0 podman[229868]: 2025-12-05 10:04:59.823597666 +0000 UTC m=+0.154246183 container attach 92b6df567ac2f7edf19bc77aee3944b290b0067f7d513e2130e93817f7477ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_keller, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 05 10:04:59 compute-0 systemd[1]: libpod-92b6df567ac2f7edf19bc77aee3944b290b0067f7d513e2130e93817f7477ef0.scope: Deactivated successfully.
Dec 05 10:04:59 compute-0 unruffled_keller[229902]: 167 167
Dec 05 10:04:59 compute-0 podman[229868]: 2025-12-05 10:04:59.828272982 +0000 UTC m=+0.158921429 container died 92b6df567ac2f7edf19bc77aee3944b290b0067f7d513e2130e93817f7477ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 10:04:59 compute-0 conmon[229902]: conmon 92b6df567ac2f7edf19b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92b6df567ac2f7edf19bc77aee3944b290b0067f7d513e2130e93817f7477ef0.scope/container/memory.events
Dec 05 10:04:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-79553dbbe7a97540611086f0005aec45397ff99aed4997632b99166757aa4259-merged.mount: Deactivated successfully.
Dec 05 10:04:59 compute-0 podman[229868]: 2025-12-05 10:04:59.874985669 +0000 UTC m=+0.205634076 container remove 92b6df567ac2f7edf19bc77aee3944b290b0067f7d513e2130e93817f7477ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_keller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 10:04:59 compute-0 systemd[1]: libpod-conmon-92b6df567ac2f7edf19bc77aee3944b290b0067f7d513e2130e93817f7477ef0.scope: Deactivated successfully.
Dec 05 10:05:00 compute-0 podman[229928]: 2025-12-05 10:05:00.068181108 +0000 UTC m=+0.042913104 container create 6cae249776ca4bf22f8776ceb9ab4be2bb4213c8a37a4fdff2a2bc20c4119b20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hopper, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 10:05:00 compute-0 systemd[1]: Started libpod-conmon-6cae249776ca4bf22f8776ceb9ab4be2bb4213c8a37a4fdff2a2bc20c4119b20.scope.
Dec 05 10:05:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f6839316426511d63eca25ce27a74cf807110c05609cf86a33a4fd18fc467b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f6839316426511d63eca25ce27a74cf807110c05609cf86a33a4fd18fc467b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f6839316426511d63eca25ce27a74cf807110c05609cf86a33a4fd18fc467b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f6839316426511d63eca25ce27a74cf807110c05609cf86a33a4fd18fc467b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f6839316426511d63eca25ce27a74cf807110c05609cf86a33a4fd18fc467b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:00 compute-0 podman[229928]: 2025-12-05 10:05:00.141416224 +0000 UTC m=+0.116148220 container init 6cae249776ca4bf22f8776ceb9ab4be2bb4213c8a37a4fdff2a2bc20c4119b20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hopper, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:05:00 compute-0 podman[229928]: 2025-12-05 10:05:00.050416236 +0000 UTC m=+0.025148252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:05:00 compute-0 podman[229928]: 2025-12-05 10:05:00.149975275 +0000 UTC m=+0.124707271 container start 6cae249776ca4bf22f8776ceb9ab4be2bb4213c8a37a4fdff2a2bc20c4119b20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:05:00 compute-0 podman[229928]: 2025-12-05 10:05:00.154353585 +0000 UTC m=+0.129085601 container attach 6cae249776ca4bf22f8776ceb9ab4be2bb4213c8a37a4fdff2a2bc20c4119b20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:05:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:00 compute-0 podman[229961]: 2025-12-05 10:05:00.225174255 +0000 UTC m=+0.112101361 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:05:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:00.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:00 compute-0 sudo[230055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmtmmwcbjmertiajcspztthlzsbkjqym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929099.6302147-548-136380944978336/AnsiballZ_systemd.py'
Dec 05 10:05:00 compute-0 sudo[230055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:00 compute-0 friendly_hopper[229965]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:05:00 compute-0 friendly_hopper[229965]: --> All data devices are unavailable
Dec 05 10:05:00 compute-0 systemd[1]: libpod-6cae249776ca4bf22f8776ceb9ab4be2bb4213c8a37a4fdff2a2bc20c4119b20.scope: Deactivated successfully.
Dec 05 10:05:00 compute-0 podman[229928]: 2025-12-05 10:05:00.535759626 +0000 UTC m=+0.510491632 container died 6cae249776ca4bf22f8776ceb9ab4be2bb4213c8a37a4fdff2a2bc20c4119b20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:05:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-01f6839316426511d63eca25ce27a74cf807110c05609cf86a33a4fd18fc467b-merged.mount: Deactivated successfully.
Dec 05 10:05:00 compute-0 podman[229928]: 2025-12-05 10:05:00.603622136 +0000 UTC m=+0.578354172 container remove 6cae249776ca4bf22f8776ceb9ab4be2bb4213c8a37a4fdff2a2bc20c4119b20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:05:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:00 compute-0 systemd[1]: libpod-conmon-6cae249776ca4bf22f8776ceb9ab4be2bb4213c8a37a4fdff2a2bc20c4119b20.scope: Deactivated successfully.
Dec 05 10:05:00 compute-0 sudo[229742]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:00 compute-0 sudo[230073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:05:00 compute-0 sudo[230073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:05:00 compute-0 sudo[230073]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:00 compute-0 python3.9[230057]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 10:05:00 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 05 10:05:00 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 05 10:05:00 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 05 10:05:00 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 05 10:05:00 compute-0 sudo[230098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:05:00 compute-0 sudo[230098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:05:00 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 05 10:05:00 compute-0 sudo[230055]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:01 compute-0 podman[230262]: 2025-12-05 10:05:01.199096583 +0000 UTC m=+0.038906346 container create 2a6fced6602cdfc09c9635b6fa054f4cf338b3c4e0fbcc37188188ed7618ca8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 05 10:05:01 compute-0 systemd[1]: Started libpod-conmon-2a6fced6602cdfc09c9635b6fa054f4cf338b3c4e0fbcc37188188ed7618ca8c.scope.
Dec 05 10:05:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:05:01 compute-0 podman[230262]: 2025-12-05 10:05:01.183474319 +0000 UTC m=+0.023284102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:05:01 compute-0 podman[230262]: 2025-12-05 10:05:01.284707673 +0000 UTC m=+0.124517466 container init 2a6fced6602cdfc09c9635b6fa054f4cf338b3c4e0fbcc37188188ed7618ca8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hugle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 05 10:05:01 compute-0 podman[230262]: 2025-12-05 10:05:01.293374508 +0000 UTC m=+0.133184271 container start 2a6fced6602cdfc09c9635b6fa054f4cf338b3c4e0fbcc37188188ed7618ca8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 10:05:01 compute-0 podman[230262]: 2025-12-05 10:05:01.296812432 +0000 UTC m=+0.136622215 container attach 2a6fced6602cdfc09c9635b6fa054f4cf338b3c4e0fbcc37188188ed7618ca8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hugle, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 10:05:01 compute-0 bold_hugle[230308]: 167 167
Dec 05 10:05:01 compute-0 systemd[1]: libpod-2a6fced6602cdfc09c9635b6fa054f4cf338b3c4e0fbcc37188188ed7618ca8c.scope: Deactivated successfully.
Dec 05 10:05:01 compute-0 podman[230262]: 2025-12-05 10:05:01.300282616 +0000 UTC m=+0.140092379 container died 2a6fced6602cdfc09c9635b6fa054f4cf338b3c4e0fbcc37188188ed7618ca8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hugle, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 10:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ae1cb8025a5440c20a0557be529aea9051197448434600925c59968d11ac3d0-merged.mount: Deactivated successfully.
Dec 05 10:05:01 compute-0 sudo[230340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leahyrjdfewvqfzijsmzyccjsnuzmedn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929101.05554-572-83573696684304/AnsiballZ_file.py'
Dec 05 10:05:01 compute-0 sudo[230340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:01 compute-0 podman[230262]: 2025-12-05 10:05:01.335999385 +0000 UTC m=+0.175809148 container remove 2a6fced6602cdfc09c9635b6fa054f4cf338b3c4e0fbcc37188188ed7618ca8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hugle, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:05:01 compute-0 systemd[1]: libpod-conmon-2a6fced6602cdfc09c9635b6fa054f4cf338b3c4e0fbcc37188188ed7618ca8c.scope: Deactivated successfully.
Dec 05 10:05:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:05:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:01.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:05:01 compute-0 podman[230360]: 2025-12-05 10:05:01.535118854 +0000 UTC m=+0.053504712 container create 95564c2f167484eb949aa938b6117bf895541898671cdffbb666b3591878024d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yalow, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 10:05:01 compute-0 python3.9[230352]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:05:01 compute-0 sudo[230340]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:01 compute-0 systemd[1]: Started libpod-conmon-95564c2f167484eb949aa938b6117bf895541898671cdffbb666b3591878024d.scope.
Dec 05 10:05:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154f05fcc590c3ad8725c2a96d22b94f7be41325776032c4defabe199e45e155/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:01 compute-0 podman[230360]: 2025-12-05 10:05:01.515367428 +0000 UTC m=+0.033753276 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154f05fcc590c3ad8725c2a96d22b94f7be41325776032c4defabe199e45e155/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154f05fcc590c3ad8725c2a96d22b94f7be41325776032c4defabe199e45e155/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154f05fcc590c3ad8725c2a96d22b94f7be41325776032c4defabe199e45e155/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:01 compute-0 ceph-mon[74418]: pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:01 compute-0 podman[230360]: 2025-12-05 10:05:01.63563756 +0000 UTC m=+0.154023408 container init 95564c2f167484eb949aa938b6117bf895541898671cdffbb666b3591878024d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:05:01 compute-0 podman[230360]: 2025-12-05 10:05:01.644957182 +0000 UTC m=+0.163343020 container start 95564c2f167484eb949aa938b6117bf895541898671cdffbb666b3591878024d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yalow, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:05:01 compute-0 podman[230360]: 2025-12-05 10:05:01.650191284 +0000 UTC m=+0.168577172 container attach 95564c2f167484eb949aa938b6117bf895541898671cdffbb666b3591878024d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yalow, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 10:05:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:01 compute-0 friendly_yalow[230377]: {
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:     "1": [
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:         {
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             "devices": [
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "/dev/loop3"
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             ],
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             "lv_name": "ceph_lv0",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             "lv_size": "21470642176",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             "name": "ceph_lv0",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             "tags": {
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.cluster_name": "ceph",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.crush_device_class": "",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.encrypted": "0",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.osd_id": "1",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.type": "block",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.vdo": "0",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:                 "ceph.with_tpm": "0"
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             },
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             "type": "block",
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:             "vg_name": "ceph_vg0"
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:         }
Dec 05 10:05:01 compute-0 friendly_yalow[230377]:     ]
Dec 05 10:05:01 compute-0 friendly_yalow[230377]: }
Dec 05 10:05:01 compute-0 systemd[1]: libpod-95564c2f167484eb949aa938b6117bf895541898671cdffbb666b3591878024d.scope: Deactivated successfully.
Dec 05 10:05:01 compute-0 podman[230360]: 2025-12-05 10:05:01.980277124 +0000 UTC m=+0.498662982 container died 95564c2f167484eb949aa938b6117bf895541898671cdffbb666b3591878024d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yalow, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:05:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-154f05fcc590c3ad8725c2a96d22b94f7be41325776032c4defabe199e45e155-merged.mount: Deactivated successfully.
Dec 05 10:05:02 compute-0 podman[230360]: 2025-12-05 10:05:02.026382084 +0000 UTC m=+0.544767942 container remove 95564c2f167484eb949aa938b6117bf895541898671cdffbb666b3591878024d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yalow, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:05:02 compute-0 systemd[1]: libpod-conmon-95564c2f167484eb949aa938b6117bf895541898671cdffbb666b3591878024d.scope: Deactivated successfully.
Dec 05 10:05:02 compute-0 sudo[230098]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:02 compute-0 sudo[230513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:05:02 compute-0 sudo[230513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:05:02 compute-0 sudo[230513]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:02 compute-0 sudo[230576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtzzgaigvhqgsalhfpodneseevawqvxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929101.8951368-599-14336839042602/AnsiballZ_stat.py'
Dec 05 10:05:02 compute-0 sudo[230576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:02 compute-0 sudo[230569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:05:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0001020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:02 compute-0 sudo[230569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:05:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:02.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:02 compute-0 python3.9[230594]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:05:02 compute-0 sudo[230576]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:02 compute-0 podman[230665]: 2025-12-05 10:05:02.580633253 +0000 UTC m=+0.038114135 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:05:02 compute-0 sudo[230804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwhbvlsiyyzimqwdoabqfzinamxiegwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929102.6233017-626-210665010663453/AnsiballZ_stat.py'
Dec 05 10:05:02 compute-0 sudo[230804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:02 compute-0 podman[230665]: 2025-12-05 10:05:02.970804122 +0000 UTC m=+0.428285014 container create efc1bc9b0bd5be177d78110ac2625a218c0a0d87cb94f87192785625b3d24835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:05:03 compute-0 ceph-mon[74418]: pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:03 compute-0 systemd[1]: Started libpod-conmon-efc1bc9b0bd5be177d78110ac2625a218c0a0d87cb94f87192785625b3d24835.scope.
Dec 05 10:05:03 compute-0 python3.9[230806]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:05:03 compute-0 sudo[230804]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:05:03 compute-0 podman[230665]: 2025-12-05 10:05:03.13004441 +0000 UTC m=+0.587525302 container init efc1bc9b0bd5be177d78110ac2625a218c0a0d87cb94f87192785625b3d24835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_brahmagupta, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:05:03 compute-0 podman[230665]: 2025-12-05 10:05:03.1385397 +0000 UTC m=+0.596020572 container start efc1bc9b0bd5be177d78110ac2625a218c0a0d87cb94f87192785625b3d24835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 10:05:03 compute-0 podman[230665]: 2025-12-05 10:05:03.141934052 +0000 UTC m=+0.599414924 container attach efc1bc9b0bd5be177d78110ac2625a218c0a0d87cb94f87192785625b3d24835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_brahmagupta, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 10:05:03 compute-0 vigorous_brahmagupta[230810]: 167 167
Dec 05 10:05:03 compute-0 systemd[1]: libpod-efc1bc9b0bd5be177d78110ac2625a218c0a0d87cb94f87192785625b3d24835.scope: Deactivated successfully.
Dec 05 10:05:03 compute-0 podman[230665]: 2025-12-05 10:05:03.145092487 +0000 UTC m=+0.602573359 container died efc1bc9b0bd5be177d78110ac2625a218c0a0d87cb94f87192785625b3d24835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 10:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8da330480a46bbd11948659634351fff77156b24f004259d0f2a79d92163a98-merged.mount: Deactivated successfully.
Dec 05 10:05:03 compute-0 podman[230665]: 2025-12-05 10:05:03.194416786 +0000 UTC m=+0.651897648 container remove efc1bc9b0bd5be177d78110ac2625a218c0a0d87cb94f87192785625b3d24835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:05:03 compute-0 systemd[1]: libpod-conmon-efc1bc9b0bd5be177d78110ac2625a218c0a0d87cb94f87192785625b3d24835.scope: Deactivated successfully.
Dec 05 10:05:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:05:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:03.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:05:03 compute-0 podman[230860]: 2025-12-05 10:05:03.375497446 +0000 UTC m=+0.056396680 container create f2cdf4c114c18238e3ffb6479a474dac0377bf76f7944bc06c17085371c3171b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_lalande, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:05:03 compute-0 systemd[1]: Started libpod-conmon-f2cdf4c114c18238e3ffb6479a474dac0377bf76f7944bc06c17085371c3171b.scope.
Dec 05 10:05:03 compute-0 podman[230860]: 2025-12-05 10:05:03.350526849 +0000 UTC m=+0.031426163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:05:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76903df9b3b2223cb4952ce4bd18829bcd94dc50a05f7dcc902dbebc7758990e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76903df9b3b2223cb4952ce4bd18829bcd94dc50a05f7dcc902dbebc7758990e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76903df9b3b2223cb4952ce4bd18829bcd94dc50a05f7dcc902dbebc7758990e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76903df9b3b2223cb4952ce4bd18829bcd94dc50a05f7dcc902dbebc7758990e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:03 compute-0 podman[230860]: 2025-12-05 10:05:03.475796235 +0000 UTC m=+0.156695479 container init f2cdf4c114c18238e3ffb6479a474dac0377bf76f7944bc06c17085371c3171b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_lalande, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 05 10:05:03 compute-0 podman[230860]: 2025-12-05 10:05:03.487830892 +0000 UTC m=+0.168730126 container start f2cdf4c114c18238e3ffb6479a474dac0377bf76f7944bc06c17085371c3171b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_lalande, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:05:03 compute-0 podman[230860]: 2025-12-05 10:05:03.491014878 +0000 UTC m=+0.171914142 container attach f2cdf4c114c18238e3ffb6479a474dac0377bf76f7944bc06c17085371c3171b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:05:03 compute-0 sudo[231006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hewdkyzqyifxswpgsqwqkjqozahfzqcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929103.362326-650-46000448291582/AnsiballZ_stat.py'
Dec 05 10:05:03 compute-0 sudo[231006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc001340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:03 compute-0 python3.9[231008]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:05:03 compute-0 sudo[231006]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:04 compute-0 lvm[231148]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:05:04 compute-0 lvm[231148]: VG ceph_vg0 finished
Dec 05 10:05:04 compute-0 confident_lalande[230928]: {}
Dec 05 10:05:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:04 compute-0 systemd[1]: libpod-f2cdf4c114c18238e3ffb6479a474dac0377bf76f7944bc06c17085371c3171b.scope: Deactivated successfully.
Dec 05 10:05:04 compute-0 systemd[1]: libpod-f2cdf4c114c18238e3ffb6479a474dac0377bf76f7944bc06c17085371c3171b.scope: Consumed 1.148s CPU time.
Dec 05 10:05:04 compute-0 podman[230860]: 2025-12-05 10:05:04.197322839 +0000 UTC m=+0.878222093 container died f2cdf4c114c18238e3ffb6479a474dac0377bf76f7944bc06c17085371c3171b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_lalande, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 05 10:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-76903df9b3b2223cb4952ce4bd18829bcd94dc50a05f7dcc902dbebc7758990e-merged.mount: Deactivated successfully.
Dec 05 10:05:04 compute-0 podman[230860]: 2025-12-05 10:05:04.235402992 +0000 UTC m=+0.916302226 container remove f2cdf4c114c18238e3ffb6479a474dac0377bf76f7944bc06c17085371c3171b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_lalande, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:05:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:04.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:04 compute-0 systemd[1]: libpod-conmon-f2cdf4c114c18238e3ffb6479a474dac0377bf76f7944bc06c17085371c3171b.scope: Deactivated successfully.
Dec 05 10:05:04 compute-0 sudo[230569]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:05:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:05:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:05:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:05:04 compute-0 sudo[231189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:05:04 compute-0 sudo[231189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:05:04 compute-0 sudo[231189]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:05:04 compute-0 sudo[231241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlqdozotmmldxlhrdoznrjdmxlzbtjbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929103.362326-650-46000448291582/AnsiballZ_copy.py'
Dec 05 10:05:04 compute-0 sudo[231241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:05:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:04 compute-0 python3.9[231243]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764929103.362326-650-46000448291582/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:04 compute-0 sudo[231241]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:05 compute-0 sudo[231393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yooswqsvpripigqygeiyjkuixjghtbcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929104.943342-695-28067219527875/AnsiballZ_command.py'
Dec 05 10:05:05 compute-0 sudo[231393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:05:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:05:05 compute-0 ceph-mon[74418]: pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:05:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:05.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:05 compute-0 python3.9[231395]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:05:05 compute-0 sudo[231393]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:05:05] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 10:05:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:05:05] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 10:05:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0001b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:06 compute-0 sudo[231546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhvmujjrodnmrnhgkqqljutbzpdmzyqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929105.7458646-719-265446032672837/AnsiballZ_lineinfile.py'
Dec 05 10:05:06 compute-0 sudo[231546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:06 compute-0 python3.9[231548]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc001340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:06 compute-0 sudo[231546]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:05:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:06.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:05:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:06 compute-0 sudo[231700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqqqtbemvcemwkngmaiscekpxwxtjykt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929106.4348764-743-44367250162512/AnsiballZ_replace.py'
Dec 05 10:05:06 compute-0 sudo[231700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:07 compute-0 python3.9[231702]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:05:07.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:05:07 compute-0 sudo[231700]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:07.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:07 compute-0 sudo[231852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwqtzwjqklsmilxtulfojsunuriiufmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929107.2281563-767-178464792512531/AnsiballZ_replace.py'
Dec 05 10:05:07 compute-0 sudo[231852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:07 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:07 compute-0 python3.9[231854]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:07 compute-0 sudo[231852]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:07 compute-0 ceph-mon[74418]: pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0001b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:08.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:08 compute-0 sudo[232006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flcgjlnjohmcvwcugcrjdsrxuzcsiwlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929108.0782034-794-237383069297866/AnsiballZ_lineinfile.py'
Dec 05 10:05:08 compute-0 sudo[232006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:08 compute-0 python3.9[232008]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:08 compute-0 sudo[232006]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc001340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:08 compute-0 ceph-mon[74418]: pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:08 compute-0 sudo[232158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xomisqmspnpkdclarswwvzghhytkdhnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929108.7051167-794-67243685570136/AnsiballZ_lineinfile.py'
Dec 05 10:05:08 compute-0 sudo[232158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:09 compute-0 python3.9[232160]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:09 compute-0 sudo[232158]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:09 compute-0 sudo[232173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:05:09 compute-0 sudo[232173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:05:09 compute-0 sudo[232173]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:09.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:05:09 compute-0 sudo[232335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buuadkdrsodcjikroxvlbqkdafcampmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929109.3179014-794-128925114757141/AnsiballZ_lineinfile.py'
Dec 05 10:05:09 compute-0 sudo[232335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:09 compute-0 python3.9[232337]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:09 compute-0 sudo[232335]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0001b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:10.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:10 compute-0 sudo[232488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnmrlsihqokcthynvulllunpejukwwei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929110.0135462-794-225785499925860/AnsiballZ_lineinfile.py'
Dec 05 10:05:10 compute-0 sudo[232488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:10 compute-0 python3.9[232490]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:10 compute-0 sudo[232488]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:10 compute-0 sudo[232641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeaswntnvzxazrwrwrforwxpudftajgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929110.6740775-881-215459431151603/AnsiballZ_stat.py'
Dec 05 10:05:10 compute-0 sudo[232641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:11 compute-0 python3.9[232643]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:05:11 compute-0 sudo[232641]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:05:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:11.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:05:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc001340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:11 compute-0 ceph-mon[74418]: pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:11 compute-0 sudo[232795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whfhifxuanpsegblgtpcrepgsuveejst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929111.4841344-905-17665943273983/AnsiballZ_file.py'
Dec 05 10:05:11 compute-0 sudo[232795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:11 compute-0 python3.9[232797]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:11 compute-0 sudo[232795]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:05:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:12.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:05:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:05:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:05:12 compute-0 sudo[232949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxqlipvhjonhjswrlmetrgdmoxqhhpsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929112.3057573-932-274675155408819/AnsiballZ_file.py'
Dec 05 10:05:12 compute-0 sudo[232949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0001b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:12 compute-0 ceph-mon[74418]: pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:05:12 compute-0 python3.9[232951]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:05:12 compute-0 sudo[232949]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:13 compute-0 sudo[233101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkkzspfdtmdoxgwiefoolifwzovvnzoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929113.0637267-956-241344677469865/AnsiballZ_stat.py'
Dec 05 10:05:13 compute-0 sudo[233101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:13.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:13 compute-0 python3.9[233103]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:05:13 compute-0 sudo[233101]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:13 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:13 compute-0 sudo[233179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcfmrcjkfaqiouooahmnymzmfxjgugwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929113.0637267-956-241344677469865/AnsiballZ_file.py'
Dec 05 10:05:13 compute-0 sudo[233179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:13 compute-0 python3.9[233181]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:05:13 compute-0 sudo[233179]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0091b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:14.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:14 compute-0 sudo[233333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giwuayfvfvpypkjboerlzjrufbskvief ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929114.0922768-956-214529957044484/AnsiballZ_stat.py'
Dec 05 10:05:14 compute-0 sudo[233333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:05:14 compute-0 python3.9[233335]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:05:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:05:14 compute-0 sudo[233333]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:14 compute-0 sudo[233411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptqxtgijypxktsvnhnnvdcffuheacjpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929114.0922768-956-214529957044484/AnsiballZ_file.py'
Dec 05 10:05:14 compute-0 sudo[233411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:14 compute-0 python3.9[233413]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:05:15 compute-0 sudo[233411]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:15.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:15 compute-0 sudo[233563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glwmscegzmmlucwwfjrbmhnrnxkddmcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929115.3458033-1025-41716873048300/AnsiballZ_file.py'
Dec 05 10:05:15 compute-0 sudo[233563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:05:15] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:05:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:05:15] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:05:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00033c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:15 compute-0 ceph-mon[74418]: pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:05:15 compute-0 python3.9[233565]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:15 compute-0 sudo[233563]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:05:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:16.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:05:16 compute-0 sudo[233716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwlrdmsmgqfzufplqhuyyholkwwwlncs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929116.0351427-1049-237221707628963/AnsiballZ_stat.py'
Dec 05 10:05:16 compute-0 sudo[233716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:16 compute-0 python3.9[233719]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:05:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:16 compute-0 sudo[233716]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0091b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:16 compute-0 ceph-mon[74418]: pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:16 compute-0 sudo[233795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyymvfupfohffoexyjaqgpvmqzqembly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929116.0351427-1049-237221707628963/AnsiballZ_file.py'
Dec 05 10:05:16 compute-0 sudo[233795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:05:17.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:05:17 compute-0 python3.9[233797]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:17 compute-0 sudo[233795]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:17.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:17 compute-0 sudo[233947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fodspfblhsdczzafrofwgmzvpqzbaxek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929117.2833374-1085-205463575838149/AnsiballZ_stat.py'
Dec 05 10:05:17 compute-0 sudo[233947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:17 compute-0 python3.9[233949]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:05:17 compute-0 sudo[233947]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:17 compute-0 sudo[234025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwjhypgwqukmfcwcvizsolmjpqoqvnzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929117.2833374-1085-205463575838149/AnsiballZ_file.py'
Dec 05 10:05:17 compute-0 sudo[234025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:18 compute-0 python3.9[234027]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:18 compute-0 sudo[234025]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00033c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:18.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:18 compute-0 sudo[234179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymellbrjoktvnoztauxbtcajkobyzxzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929118.4170434-1121-190076144360340/AnsiballZ_systemd.py'
Dec 05 10:05:18 compute-0 sudo[234179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:18 compute-0 python3.9[234181]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:05:18 compute-0 systemd[1]: Reloading.
Dec 05 10:05:19 compute-0 systemd-rc-local-generator[234209]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:05:19 compute-0 systemd-sysv-generator[234213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:05:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:19.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:05:19 compute-0 sudo[234179]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:19 compute-0 ceph-mon[74418]: pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:19 compute-0 sudo[234368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxtyzzcmagoazurqvkbhlnebtlqikzjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929119.5991106-1145-59524688693725/AnsiballZ_stat.py'
Dec 05 10:05:19 compute-0 sudo[234368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:20 compute-0 python3.9[234370]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:05:20 compute-0 sudo[234368]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:20.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:20 compute-0 sudo[234447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnzqdoyqbvxsjyecqmlyrysvejsetybl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929119.5991106-1145-59524688693725/AnsiballZ_file.py'
Dec 05 10:05:20 compute-0 sudo[234447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:20 compute-0 python3.9[234450]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:20 compute-0 sudo[234447]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:05:20.560 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:05:20.561 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:05:20.562 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:20 compute-0 ceph-mon[74418]: pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:21 compute-0 sudo[234600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofvmjltlgccyjvufxucxymrlrycmzevi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929120.8243454-1181-266788613064336/AnsiballZ_stat.py'
Dec 05 10:05:21 compute-0 sudo[234600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:21 compute-0 python3.9[234602]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:05:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:21.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:21 compute-0 sudo[234600]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:21 compute-0 sudo[234678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uscjjrexmlrlqbppgbxxulpyxtyrlhzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929120.8243454-1181-266788613064336/AnsiballZ_file.py'
Dec 05 10:05:21 compute-0 sudo[234678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:21 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:21 compute-0 python3.9[234680]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:21 compute-0 sudo[234678]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:22.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:22 compute-0 sudo[234832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoyulszxwftgfpllwsmnhpobmjxroezv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929122.1121664-1217-100444840430241/AnsiballZ_systemd.py'
Dec 05 10:05:22 compute-0 sudo[234832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:22 compute-0 python3.9[234834]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:05:22 compute-0 systemd[1]: Reloading.
Dec 05 10:05:22 compute-0 systemd-rc-local-generator[234860]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:05:22 compute-0 systemd-sysv-generator[234864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:05:23 compute-0 systemd[1]: Starting Create netns directory...
Dec 05 10:05:23 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 10:05:23 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 10:05:23 compute-0 systemd[1]: Finished Create netns directory.
Dec 05 10:05:23 compute-0 sudo[234832]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:23.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:23 compute-0 ceph-mon[74418]: pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:24 compute-0 sudo[235025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xruusrhhfjgmrlggtszdrwoukbpetsgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929123.7136807-1247-103666740448267/AnsiballZ_file.py'
Dec 05 10:05:24 compute-0 sudo[235025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:05:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:24.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:05:24 compute-0 python3.9[235027]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:05:24 compute-0 sudo[235025]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:05:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:05:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:24 compute-0 sudo[235193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziefcmuotmnadljglwoaudoocfniceif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929124.4720438-1271-172590970273758/AnsiballZ_stat.py'
Dec 05 10:05:24 compute-0 sudo[235193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:24 compute-0 podman[235153]: 2025-12-05 10:05:24.796157151 +0000 UTC m=+0.079601689 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 10:05:24 compute-0 python3.9[235201]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:05:24 compute-0 sudo[235193]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:24 compute-0 ceph-mon[74418]: pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:05:25 compute-0 sudo[235323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udrtcnysxihpnewdfnsyogllvvycbhok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929124.4720438-1271-172590970273758/AnsiballZ_copy.py'
Dec 05 10:05:25 compute-0 sudo[235323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:25.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:25 compute-0 python3.9[235325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764929124.4720438-1271-172590970273758/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:05:25 compute-0 sudo[235323]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:05:25] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:05:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:05:25] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:05:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:25 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:26.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:26 compute-0 sudo[235476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yldfcbowltiesoacycbxcurjfuwqmfsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929126.052463-1322-193577570946026/AnsiballZ_file.py'
Dec 05 10:05:26 compute-0 sudo[235476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:26 compute-0 python3.9[235478]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:05:26 compute-0 sudo[235476]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:27 compute-0 sudo[235629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaylwjjyovebyrtlyhmfvidjcrsngpxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929126.7726765-1346-192293476763369/AnsiballZ_stat.py'
Dec 05 10:05:27 compute-0 sudo[235629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:05:27.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:05:27 compute-0 python3.9[235631]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:05:27 compute-0 sudo[235629]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:05:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:27.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:05:27
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', '.nfs', 'volumes', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:05:27 compute-0 sudo[235752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysjppazomptcxecutbdzhbtkqjodpfnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929126.7726765-1346-192293476763369/AnsiballZ_copy.py'
Dec 05 10:05:27 compute-0 sudo[235752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:05:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:05:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:27 compute-0 python3.9[235754]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764929126.7726765-1346-192293476763369/.source.json _original_basename=.9syrawct follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:27 compute-0 ceph-mon[74418]: pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:05:27 compute-0 sudo[235752]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:05:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:05:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:28.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:28 compute-0 sudo[235905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djgnqkzwukbdiwfovmdpjytiikpewwgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929128.072479-1391-43959595610126/AnsiballZ_file.py'
Dec 05 10:05:28 compute-0 python3.9[235907]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:28 compute-0 sudo[235905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:28 compute-0 sudo[235905]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:29 compute-0 sudo[236058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etwmgbtbyudelzbjeuqchirwpzwoqpuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929128.8127048-1415-127426695911036/AnsiballZ_stat.py'
Dec 05 10:05:29 compute-0 sudo[236058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:29 compute-0 sudo[236058]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:29 compute-0 sudo[236064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:05:29 compute-0 sudo[236064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:05:29 compute-0 sudo[236064]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:05:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:29.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:05:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:05:29 compute-0 sudo[236206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbfswzzloagvcqbahsfmgwzrisfeniqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929128.8127048-1415-127426695911036/AnsiballZ_copy.py'
Dec 05 10:05:29 compute-0 sudo[236206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:29 compute-0 ceph-mon[74418]: pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:29 compute-0 sudo[236206]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100530 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:05:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:30.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:30 compute-0 podman[236234]: 2025-12-05 10:05:30.432451149 +0000 UTC m=+0.096814426 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 10:05:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:30 compute-0 sudo[236388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csbwlqdpukxgwazgwjxvfaapyoqckcnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929130.5394535-1466-52616589532428/AnsiballZ_container_config_data.py'
Dec 05 10:05:30 compute-0 sudo[236388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:31 compute-0 python3.9[236390]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 05 10:05:31 compute-0 sudo[236388]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:31.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:31 compute-0 ceph-mon[74418]: pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:31 compute-0 sudo[236542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiqlqubupecpfhzmdydfmhmcwoifkgzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929131.4674387-1493-173832409316902/AnsiballZ_container_config_hash.py'
Dec 05 10:05:31 compute-0 sudo[236542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:32 compute-0 python3.9[236544]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 10:05:32 compute-0 sudo[236542]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:05:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:32.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:05:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100532 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:05:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:32 compute-0 sudo[236696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hottidrjlevhsaqfrjfdxiaioibdpuuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929132.4220831-1520-257447443453270/AnsiballZ_podman_container_info.py'
Dec 05 10:05:32 compute-0 sudo[236696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:33 compute-0 python3.9[236698]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 10:05:33 compute-0 sudo[236696]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:33.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:34.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:34 compute-0 ceph-mon[74418]: pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:05:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:05:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:05:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:35 compute-0 ceph-mon[74418]: pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:05:35 compute-0 sudo[236876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdpxeyjxgpysfltoozgmjifuheekkzkd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764929134.878029-1559-133514758358654/AnsiballZ_edpm_container_manage.py'
Dec 05 10:05:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:35.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:35 compute-0 sudo[236876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:05:35] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:05:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:05:35] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:05:35 compute-0 python3[236878]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 10:05:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:36.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 05 10:05:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:05:37.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:05:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:05:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:37.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:05:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:05:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Cumulative writes: 9103 writes, 34K keys, 9103 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9103 writes, 2173 syncs, 4.19 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 733 writes, 1103 keys, 733 commit groups, 1.0 writes per commit group, ingest: 0.36 MB, 0.00 MB/s
                                           Interval WAL: 733 writes, 364 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.8 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
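
The indented block ending above is a single multi-line journal message: a periodic RocksDB statistics dump, most likely from a Ceph BlueStore OSD on this host given the sharded column-family names (m-0..2, p-0..2, O-0..2, L, P) and the BinnedLRUCache block cache. The 600-second interval in every Uptime(secs) line matches RocksDB's default stats_dump_period_sec, and the all-zero compaction tables simply reflect a nearly idle store. A minimal sketch for pulling the headline figures out of such a dump, assuming the text has already been captured from the journal (the helper name and the choice of fields are illustrative, not part of any tool logged here):

    # Hypothetical helper, not part of the logged services: summarize a RocksDB
    # stats dump like the one above, keyed by column family.
    import re

    def summarize_rocksdb_dump(text):
        summary = {}
        current = None
        for raw in text.splitlines():
            line = raw.strip()
            header = re.match(r"\*\* Compaction Stats \[(.+?)\] \*\*", line)
            if header:
                current = summary.setdefault(header.group(1), {})
                continue
            if current is None:
                continue
            uptime = re.match(r"Uptime\(secs\): ([\d.]+) total, ([\d.]+) interval", line)
            if uptime:
                current["uptime_s"] = float(uptime.group(1))
                current["interval_s"] = float(uptime.group(2))
            cache = re.search(r"Block cache \S+ capacity: (\S+ \S+) usage: (\S+ \S+)", line)
            if cache:
                current["cache_capacity"], current["cache_usage"] = cache.groups()
        return summary

Run over this dump it would report, for each column family, an uptime of 1200.9 s, a 600.0 s dump interval, and a shared 1.12 GB cache holding only about 2 KB, consistent with the empty level tables.
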
Dec 05 10:05:37 compute-0 ceph-mon[74418]: pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 05 10:05:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:37 compute-0 podman[236892]: 2025-12-05 10:05:37.710307074 +0000 UTC m=+1.947310601 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec 05 10:05:37 compute-0 podman[236954]: 2025-12-05 10:05:37.847295379 +0000 UTC m=+0.045545966 container create a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:05:37 compute-0 podman[236954]: 2025-12-05 10:05:37.825148098 +0000 UTC m=+0.023398685 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec 05 10:05:37 compute-0 python3[236878]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
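
The ansible-edpm_container_manage debug line above records the full podman create invocation used for the multipathd container, including its labels and bind mounts. A small sketch for checking afterwards that those labels actually landed on the created container (assumes podman on PATH and the container name multipathd from the log; this is a post-hoc check, not part of the deployment itself):

    # Hypothetical verification step.
    import json
    import subprocess

    raw = subprocess.run(
        ["podman", "inspect", "multipathd"],
        capture_output=True, text=True, check=True,
    ).stdout
    labels = json.loads(raw)[0]["Config"]["Labels"]
    for key in ("config_id", "container_name", "managed_by"):
        print(key, "=", labels.get(key))   # expect multipathd / multipathd / edpm_ansible
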
Dec 05 10:05:38 compute-0 sudo[236876]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:38.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
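
Each radosgw request shows up as the three lines above: a "starting new request" marker, a "req done" line with the operation status, and a beast frontend access line in roughly Apache combined format. A minimal sketch of parsing that access line; the field names are illustrative, only the layout is taken from the log:

    # Hypothetical parser for the beast access-log line shown above.
    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>\S+)'
    )

    line = ('beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous '
            '[05/Dec/2025:10:05:38.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group("client"), m.group("status"), m.group("latency"))
    # -> 192.168.122.100 200 0.000000000s

The anonymous HEAD / probes arriving roughly once per second from 192.168.122.100 and .102 look like load-balancer or health-check traffic rather than real S3 requests.
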
Dec 05 10:05:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 05 10:05:38 compute-0 sudo[237144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfhlvecjbhddvytvqyqlgtuavekldxie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929138.3291507-1583-73389733970092/AnsiballZ_stat.py'
Dec 05 10:05:38 compute-0 sudo[237144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:38 compute-0 python3.9[237146]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:05:38 compute-0 sudo[237144]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:05:39 compute-0 sudo[237298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogptwzhexfofivocwqbdwpcoapswbjec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929139.149613-1610-202851936030898/AnsiballZ_file.py'
Dec 05 10:05:39 compute-0 sudo[237298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:39.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:39 compute-0 python3.9[237300]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:39 compute-0 sudo[237298]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:39 compute-0 ceph-mon[74418]: pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 05 10:05:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:39 compute-0 sudo[237374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjsxukuhlvgixakvxevwcudsyztrtpeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929139.149613-1610-202851936030898/AnsiballZ_stat.py'
Dec 05 10:05:39 compute-0 sudo[237374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:05:40 compute-0 python3.9[237376]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:05:40 compute-0 sudo[237374]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:05:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:40.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:05:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:05:40 compute-0 sudo[237527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eskbgkxbpkdfdlbnxfumpwexmngyutua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929140.1131227-1610-27641083545644/AnsiballZ_copy.py'
Dec 05 10:05:40 compute-0 sudo[237527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:40 compute-0 python3.9[237529]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764929140.1131227-1610-27641083545644/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:40 compute-0 sudo[237527]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:41 compute-0 sudo[237603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opwkglouihljmbvrkpfodrdnwejosncn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929140.1131227-1610-27641083545644/AnsiballZ_systemd.py'
Dec 05 10:05:41 compute-0 sudo[237603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:41 compute-0 python3.9[237605]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 10:05:41 compute-0 systemd[1]: Reloading.
Dec 05 10:05:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:41.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:41 compute-0 systemd-rc-local-generator[237630]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:05:41 compute-0 systemd-sysv-generator[237634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:05:41 compute-0 ceph-mon[74418]: pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:05:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:41 compute-0 sudo[237603]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:42 compute-0 sudo[237714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjuamsvuzsokrvsuzlezzilmfqofakcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929140.1131227-1610-27641083545644/AnsiballZ_systemd.py'
Dec 05 10:05:42 compute-0 sudo[237714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:42.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:42 compute-0 python3.9[237716]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:05:42 compute-0 systemd[1]: Reloading.
Dec 05 10:05:42 compute-0 systemd-rc-local-generator[237748]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:05:42 compute-0 systemd-sysv-generator[237752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:05:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:05:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:05:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:05:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:05:42 compute-0 systemd[1]: Starting multipathd container...
Dec 05 10:05:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:05:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:05:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8278b8c203634c823eee4ab6476dc741e28ee2fc3ad8315ad972e4af9f11fc8c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8278b8c203634c823eee4ab6476dc741e28ee2fc3ad8315ad972e4af9f11fc8c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:42 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8.
Dec 05 10:05:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:05:42 compute-0 podman[237758]: 2025-12-05 10:05:42.93713159 +0000 UTC m=+0.133417539 container init a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 10:05:42 compute-0 multipathd[237773]: + sudo -E kolla_set_configs
Dec 05 10:05:42 compute-0 podman[237758]: 2025-12-05 10:05:42.967806222 +0000 UTC m=+0.164092161 container start a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd)
Dec 05 10:05:42 compute-0 podman[237758]: multipathd
Dec 05 10:05:42 compute-0 sudo[237779]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 10:05:42 compute-0 systemd[1]: Started multipathd container.
Dec 05 10:05:42 compute-0 sudo[237779]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 10:05:42 compute-0 sudo[237779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 10:05:43 compute-0 sudo[237714]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:43 compute-0 multipathd[237773]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 10:05:43 compute-0 multipathd[237773]: INFO:__main__:Validating config file
Dec 05 10:05:43 compute-0 multipathd[237773]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 10:05:43 compute-0 multipathd[237773]: INFO:__main__:Writing out command to execute
Dec 05 10:05:43 compute-0 sudo[237779]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:43 compute-0 multipathd[237773]: ++ cat /run_command
Dec 05 10:05:43 compute-0 multipathd[237773]: + CMD='/usr/sbin/multipathd -d'
Dec 05 10:05:43 compute-0 multipathd[237773]: + ARGS=
Dec 05 10:05:43 compute-0 multipathd[237773]: + sudo kolla_copy_cacerts
Dec 05 10:05:43 compute-0 sudo[237803]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 10:05:43 compute-0 sudo[237803]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 10:05:43 compute-0 podman[237780]: 2025-12-05 10:05:43.059531689 +0000 UTC m=+0.074842641 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:05:43 compute-0 sudo[237803]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 10:05:43 compute-0 sudo[237803]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:43 compute-0 systemd[1]: a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8-2db474af5b5c1371.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 10:05:43 compute-0 systemd[1]: a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8-2db474af5b5c1371.service: Failed with result 'exit-code'.
Dec 05 10:05:43 compute-0 multipathd[237773]: + [[ ! -n '' ]]
Dec 05 10:05:43 compute-0 multipathd[237773]: + . kolla_extend_start
Dec 05 10:05:43 compute-0 multipathd[237773]: Running command: '/usr/sbin/multipathd -d'
Dec 05 10:05:43 compute-0 multipathd[237773]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 05 10:05:43 compute-0 multipathd[237773]: + umask 0022
Dec 05 10:05:43 compute-0 multipathd[237773]: + exec /usr/sbin/multipathd -d
Dec 05 10:05:43 compute-0 multipathd[237773]: 3913.162773 | --------start up--------
Dec 05 10:05:43 compute-0 multipathd[237773]: 3913.162790 | read /etc/multipath.conf
Dec 05 10:05:43 compute-0 multipathd[237773]: 3913.168805 | path checkers start up
Dec 05 10:05:43 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 05 10:05:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:43.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:43 compute-0 ceph-mon[74418]: pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:05:44 compute-0 python3.9[237961]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:05:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:44.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:05:44 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 05 10:05:44 compute-0 sudo[238116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urakgvwqiyaajbrijzstkzoeczdoehbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929144.2537124-1718-261898725417260/AnsiballZ_command.py'
Dec 05 10:05:44 compute-0 sudo[238116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:05:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:44 compute-0 python3.9[238118]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:05:44 compute-0 sudo[238116]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:05:45 compute-0 sudo[238281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncvjxeqhgbstptuvlyexvwnhsmrtogbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929145.040874-1742-47282435253035/AnsiballZ_systemd.py'
Dec 05 10:05:45 compute-0 sudo[238281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:45.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:45 compute-0 python3.9[238283]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 10:05:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:05:45] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:05:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:05:45] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:05:45 compute-0 systemd[1]: Stopping multipathd container...
Dec 05 10:05:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:45 compute-0 multipathd[237773]: 3915.813590 | exit (signal)
Dec 05 10:05:45 compute-0 multipathd[237773]: 3915.813662 | --------shut down-------
Dec 05 10:05:45 compute-0 systemd[1]: libpod-a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8.scope: Deactivated successfully.
Dec 05 10:05:45 compute-0 ceph-mon[74418]: pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:05:45 compute-0 podman[238287]: 2025-12-05 10:05:45.762838668 +0000 UTC m=+0.070853403 container died a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible)
Dec 05 10:05:45 compute-0 systemd[1]: a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8-2db474af5b5c1371.timer: Deactivated successfully.
Dec 05 10:05:45 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8.
Dec 05 10:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8-userdata-shm.mount: Deactivated successfully.
Dec 05 10:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8278b8c203634c823eee4ab6476dc741e28ee2fc3ad8315ad972e4af9f11fc8c-merged.mount: Deactivated successfully.
Dec 05 10:05:46 compute-0 podman[238287]: 2025-12-05 10:05:46.163380878 +0000 UTC m=+0.471395623 container cleanup a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:05:46 compute-0 podman[238287]: multipathd
Dec 05 10:05:46 compute-0 podman[238318]: multipathd
Dec 05 10:05:46 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 05 10:05:46 compute-0 systemd[1]: Stopped multipathd container.
Dec 05 10:05:46 compute-0 systemd[1]: Starting multipathd container...
Dec 05 10:05:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:46.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8278b8c203634c823eee4ab6476dc741e28ee2fc3ad8315ad972e4af9f11fc8c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8278b8c203634c823eee4ab6476dc741e28ee2fc3ad8315ad972e4af9f11fc8c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 10:05:46 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8.
Dec 05 10:05:46 compute-0 podman[238331]: 2025-12-05 10:05:46.339037411 +0000 UTC m=+0.096220720 container init a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:05:46 compute-0 multipathd[238346]: + sudo -E kolla_set_configs
Dec 05 10:05:46 compute-0 podman[238331]: 2025-12-05 10:05:46.365694184 +0000 UTC m=+0.122877443 container start a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 05 10:05:46 compute-0 sudo[238353]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 10:05:46 compute-0 sudo[238353]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 10:05:46 compute-0 sudo[238353]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 10:05:46 compute-0 podman[238331]: multipathd
Dec 05 10:05:46 compute-0 systemd[1]: Started multipathd container.
Dec 05 10:05:46 compute-0 sudo[238281]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:46 compute-0 multipathd[238346]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 10:05:46 compute-0 multipathd[238346]: INFO:__main__:Validating config file
Dec 05 10:05:46 compute-0 multipathd[238346]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 10:05:46 compute-0 multipathd[238346]: INFO:__main__:Writing out command to execute
Dec 05 10:05:46 compute-0 sudo[238353]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:46 compute-0 multipathd[238346]: ++ cat /run_command
Dec 05 10:05:46 compute-0 multipathd[238346]: + CMD='/usr/sbin/multipathd -d'
Dec 05 10:05:46 compute-0 multipathd[238346]: + ARGS=
Dec 05 10:05:46 compute-0 multipathd[238346]: + sudo kolla_copy_cacerts
Dec 05 10:05:46 compute-0 sudo[238382]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 10:05:46 compute-0 sudo[238382]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 10:05:46 compute-0 sudo[238382]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 10:05:46 compute-0 sudo[238382]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:46 compute-0 multipathd[238346]: + [[ ! -n '' ]]
Dec 05 10:05:46 compute-0 multipathd[238346]: + . kolla_extend_start
Dec 05 10:05:46 compute-0 multipathd[238346]: Running command: '/usr/sbin/multipathd -d'
Dec 05 10:05:46 compute-0 multipathd[238346]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 05 10:05:46 compute-0 multipathd[238346]: + umask 0022
Dec 05 10:05:46 compute-0 multipathd[238346]: + exec /usr/sbin/multipathd -d
Dec 05 10:05:46 compute-0 podman[238354]: 2025-12-05 10:05:46.455588121 +0000 UTC m=+0.081208083 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:05:46 compute-0 systemd[1]: a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8-26d18b13670ba4fa.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 10:05:46 compute-0 systemd[1]: a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8-26d18b13670ba4fa.service: Failed with result 'exit-code'.
Dec 05 10:05:46 compute-0 multipathd[238346]: 3916.545330 | --------start up--------
Dec 05 10:05:46 compute-0 multipathd[238346]: 3916.545342 | read /etc/multipath.conf
Dec 05 10:05:46 compute-0 multipathd[238346]: 3916.550995 | path checkers start up
Dec 05 10:05:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:05:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:05:47.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:05:47 compute-0 sudo[238535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jctoubbjwydrhsrlllqygmqcgmevawyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929146.702854-1766-105482854937126/AnsiballZ_file.py'
Dec 05 10:05:47 compute-0 sudo[238535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:47 compute-0 python3.9[238537]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:47 compute-0 sudo[238535]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:47.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:47 compute-0 ceph-mon[74418]: pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:05:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:05:48 compute-0 sudo[238690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdepvgchprwlmiezrrwrqlinnmomrfsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929147.9147162-1802-15618027034903/AnsiballZ_file.py'
Dec 05 10:05:48 compute-0 sudo[238690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:48.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:48 compute-0 python3.9[238692]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 10:05:48 compute-0 sudo[238690]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:05:48 compute-0 sshd-session[238642]: Invalid user admin from 139.19.117.129 port 59202
Dec 05 10:05:48 compute-0 sshd-session[238642]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Dec 05 10:05:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:48 compute-0 sudo[238843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebdcejbhealbnaiwxgrthgsnstirwips ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929148.641758-1826-147326654733681/AnsiballZ_modprobe.py'
Dec 05 10:05:48 compute-0 sudo[238843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:49 compute-0 python3.9[238845]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 05 10:05:49 compute-0 kernel: Key type psk registered
Dec 05 10:05:49 compute-0 sudo[238843]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:05:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:49.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:49 compute-0 sudo[238902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:05:49 compute-0 sudo[238902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:05:49 compute-0 sudo[238902]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:49 compute-0 sudo[239030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fihzjdpykuzwpaqtoiaiiawavrilrwzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929149.406888-1850-536056186396/AnsiballZ_stat.py'
Dec 05 10:05:49 compute-0 sudo[239030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:49 compute-0 ceph-mon[74418]: pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 05 10:05:49 compute-0 python3.9[239032]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:05:49 compute-0 sudo[239030]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:50 compute-0 sudo[239154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvvrkqcajoqhizfgsbfzvhkyofmuiwfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929149.406888-1850-536056186396/AnsiballZ_copy.py'
Dec 05 10:05:50 compute-0 sudo[239154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:50.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:50 compute-0 python3.9[239156]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764929149.406888-1850-536056186396/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:50 compute-0 sudo[239154]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 05 10:05:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:51 compute-0 sudo[239307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnnvxnodckigfwrxyhjdqfyzdfzoxghu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929150.737385-1898-140836265739968/AnsiballZ_lineinfile.py'
Dec 05 10:05:51 compute-0 sudo[239307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:51 compute-0 ceph-mon[74418]: pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 05 10:05:51 compute-0 python3.9[239309]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:05:51 compute-0 sudo[239307]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:51.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:51 compute-0 sudo[239459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rztadgfseggcxrpgjflkqbhvrtczerwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929151.5654042-1922-38106570508782/AnsiballZ_systemd.py'
Dec 05 10:05:51 compute-0 sudo[239459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100552 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:05:52 compute-0 python3.9[239461]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 10:05:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:52 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 05 10:05:52 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 05 10:05:52 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 05 10:05:52 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 05 10:05:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:52.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:52 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 05 10:05:52 compute-0 sudo[239459]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:05:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:52 compute-0 sudo[239617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akqtbhhzcfzdzaubhiekyfzzraabsbiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929152.6412225-1946-240681038635677/AnsiballZ_dnf.py'
Dec 05 10:05:52 compute-0 sudo[239617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:53 compute-0 python3.9[239619]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 10:05:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:53.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:53 compute-0 ceph-mgr[74711]: [dashboard INFO request] [192.168.122.100:56594] [POST] [200] [0.005s] [4.0B] [c5e9447b-95e9-4ede-9210-49d289a099cd] /api/prometheus_receiver
Dec 05 10:05:53 compute-0 ceph-mon[74418]: pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:05:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:05:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:54.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:05:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100554 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:05:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:05:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 05 10:05:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:55 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 05 10:05:55 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 05 10:05:55 compute-0 podman[239627]: 2025-12-05 10:05:55.322303829 +0000 UTC m=+0.089940969 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 10:05:55 compute-0 systemd[1]: Reloading.
Dec 05 10:05:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:05:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:55.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:05:55 compute-0 systemd-rc-local-generator[239674]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:05:55 compute-0 systemd-sysv-generator[239679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:05:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:05:55] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:05:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:05:55] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec 05 10:05:55 compute-0 ceph-mon[74418]: pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 05 10:05:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:55 compute-0 systemd[1]: Reloading.
Dec 05 10:05:55 compute-0 systemd-rc-local-generator[239710]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:05:55 compute-0 systemd-sysv-generator[239713]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:05:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:56 compute-0 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 05 10:05:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:56.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:56 compute-0 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 05 10:05:56 compute-0 lvm[239756]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:05:56 compute-0 lvm[239756]: VG ceph_vg0 finished
Dec 05 10:05:56 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 10:05:56 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 10:05:56 compute-0 systemd[1]: Reloading.
Dec 05 10:05:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:05:56 compute-0 systemd-rc-local-generator[239810]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:05:56 compute-0 systemd-sysv-generator[239813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:05:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:56 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 10:05:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:05:57.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:05:57 compute-0 ceph-mon[74418]: pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:05:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:05:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:57.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:05:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:05:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:05:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:05:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:05:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:05:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:05:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:05:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:05:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:57 compute-0 sudo[239617]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:58 compute-0 sshd-session[238642]: Connection closed by invalid user admin 139.19.117.129 port 59202 [preauth]
Dec 05 10:05:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:05:58.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:05:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:05:58 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 10:05:58 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 10:05:58 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.532s CPU time.
Dec 05 10:05:58 compute-0 systemd[1]: run-r267857acac3044338850dbd96386f086.service: Deactivated successfully.
Dec 05 10:05:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:58 compute-0 sudo[241099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvmekewkyecuvrlqqjvhyauoyqtctuex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929158.514264-1970-9878570196662/AnsiballZ_systemd_service.py'
Dec 05 10:05:58 compute-0 sudo[241099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:05:59 compute-0 python3.9[241101]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 10:05:59 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec 05 10:05:59 compute-0 iscsid[228500]: iscsid shutting down.
Dec 05 10:05:59 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec 05 10:05:59 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec 05 10:05:59 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 05 10:05:59 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 05 10:05:59 compute-0 systemd[1]: Started Open-iSCSI.
Dec 05 10:05:59 compute-0 sudo[241099]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:05:59 compute-0 ceph-mon[74418]: pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:05:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:05:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:05:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:05:59.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:05:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:05:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:05:59 compute-0 python3.9[241255]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 10:06:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:00.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:06:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:00 compute-0 sudo[241419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogxdgahtsviystxcuwbpbopkddtxknyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929160.6005201-2022-273893118502001/AnsiballZ_file.py'
Dec 05 10:06:00 compute-0 sudo[241419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:00 compute-0 podman[241385]: 2025-12-05 10:06:00.94880294 +0000 UTC m=+0.111512154 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 10:06:01 compute-0 python3.9[241426]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:01 compute-0 sudo[241419]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:06:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:01.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:06:01 compute-0 ceph-mon[74418]: pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:06:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:01 compute-0 sudo[241590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhrzhrbjruxriinvgxiunkqsvygwbdfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929161.666417-2055-199084466721972/AnsiballZ_systemd_service.py'
Dec 05 10:06:01 compute-0 sudo[241590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:02 compute-0 python3.9[241592]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 10:06:02 compute-0 systemd[1]: Reloading.
Dec 05 10:06:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40041d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:02 compute-0 systemd-rc-local-generator[241616]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:06:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:02.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:02 compute-0 systemd-sysv-generator[241622]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:06:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:06:02 compute-0 sudo[241590]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:03 compute-0 python3.9[241778]: ansible-ansible.builtin.service_facts Invoked
Dec 05 10:06:03 compute-0 network[241795]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 10:06:03 compute-0 network[241796]: 'network-scripts' will be removed from distribution in near future.
Dec 05 10:06:03 compute-0 network[241797]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 10:06:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:03.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:03.563Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:06:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:03.565Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:06:03 compute-0 ceph-mon[74418]: pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:06:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:04.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:06:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:06:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:04 compute-0 sudo[241820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:06:04 compute-0 sudo[241820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:06:04 compute-0 sudo[241820]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:04 compute-0 sudo[241850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:06:04 compute-0 sudo[241850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:06:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 10:06:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:06:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 10:06:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:06:05 compute-0 sudo[241850]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:06:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:05.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:06:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:06:05] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:06:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:06:05] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:06:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:06:05 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:06:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:06:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:06:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:06:05 compute-0 ceph-mon[74418]: pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:06:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:06:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:06:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:06:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:06:06 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:06:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:06:06 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:06:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:06:06 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:06:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:06:06 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:06:06 compute-0 sudo[241966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:06:06 compute-0 sudo[241966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:06:06 compute-0 sudo[241966]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:06 compute-0 sudo[241991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:06:06 compute-0 sudo[241991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:06:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:06.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:06 compute-0 podman[242058]: 2025-12-05 10:06:06.639738878 +0000 UTC m=+0.062027222 container create cc47157aec362549bc7937f8aaaa74ffcb85a10c6d3908df4beb9e4de8335279 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:06:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:06 compute-0 systemd[1]: Started libpod-conmon-cc47157aec362549bc7937f8aaaa74ffcb85a10c6d3908df4beb9e4de8335279.scope.
Dec 05 10:06:06 compute-0 podman[242058]: 2025-12-05 10:06:06.608994454 +0000 UTC m=+0.031282888 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:06:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:06:06 compute-0 podman[242058]: 2025-12-05 10:06:06.735552235 +0000 UTC m=+0.157840569 container init cc47157aec362549bc7937f8aaaa74ffcb85a10c6d3908df4beb9e4de8335279 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_agnesi, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 05 10:06:06 compute-0 podman[242058]: 2025-12-05 10:06:06.745079753 +0000 UTC m=+0.167368097 container start cc47157aec362549bc7937f8aaaa74ffcb85a10c6d3908df4beb9e4de8335279 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 05 10:06:06 compute-0 podman[242058]: 2025-12-05 10:06:06.748911327 +0000 UTC m=+0.171199681 container attach cc47157aec362549bc7937f8aaaa74ffcb85a10c6d3908df4beb9e4de8335279 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 05 10:06:06 compute-0 boring_agnesi[242074]: 167 167
Dec 05 10:06:06 compute-0 systemd[1]: libpod-cc47157aec362549bc7937f8aaaa74ffcb85a10c6d3908df4beb9e4de8335279.scope: Deactivated successfully.
Dec 05 10:06:06 compute-0 conmon[242074]: conmon cc47157aec362549bc79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc47157aec362549bc7937f8aaaa74ffcb85a10c6d3908df4beb9e4de8335279.scope/container/memory.events
Dec 05 10:06:06 compute-0 podman[242058]: 2025-12-05 10:06:06.753968964 +0000 UTC m=+0.176257308 container died cc47157aec362549bc7937f8aaaa74ffcb85a10c6d3908df4beb9e4de8335279 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_agnesi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:06:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-55180afbb0c14e3f535b5404b2d5b2ee8678dc5adac2d5a594bc786afd2f956a-merged.mount: Deactivated successfully.
Dec 05 10:06:06 compute-0 podman[242058]: 2025-12-05 10:06:06.799337044 +0000 UTC m=+0.221625388 container remove cc47157aec362549bc7937f8aaaa74ffcb85a10c6d3908df4beb9e4de8335279 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_agnesi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:06:06 compute-0 systemd[1]: libpod-conmon-cc47157aec362549bc7937f8aaaa74ffcb85a10c6d3908df4beb9e4de8335279.scope: Deactivated successfully.
Dec 05 10:06:06 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:06:06 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:06:06 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:06:06 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:06:06 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:06:06 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:06:06 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:06:06 compute-0 ceph-mon[74418]: pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:07 compute-0 podman[242107]: 2025-12-05 10:06:07.040960884 +0000 UTC m=+0.048769053 container create 9e3f1938c711d5b215c860e8db8ed2d7117f7d1504e105d12084bfec11835415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 05 10:06:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:07.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:06:07 compute-0 systemd[1]: Started libpod-conmon-9e3f1938c711d5b215c860e8db8ed2d7117f7d1504e105d12084bfec11835415.scope.
Dec 05 10:06:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8adc7832e34e934366fbfa2258e1c0e1857deb4fa2919a1336bd54a08411dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8adc7832e34e934366fbfa2258e1c0e1857deb4fa2919a1336bd54a08411dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8adc7832e34e934366fbfa2258e1c0e1857deb4fa2919a1336bd54a08411dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8adc7832e34e934366fbfa2258e1c0e1857deb4fa2919a1336bd54a08411dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8adc7832e34e934366fbfa2258e1c0e1857deb4fa2919a1336bd54a08411dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:07 compute-0 podman[242107]: 2025-12-05 10:06:07.020441498 +0000 UTC m=+0.028249687 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:06:07 compute-0 podman[242107]: 2025-12-05 10:06:07.138829677 +0000 UTC m=+0.146637846 container init 9e3f1938c711d5b215c860e8db8ed2d7117f7d1504e105d12084bfec11835415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_leavitt, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 10:06:07 compute-0 podman[242107]: 2025-12-05 10:06:07.151352616 +0000 UTC m=+0.159160806 container start 9e3f1938c711d5b215c860e8db8ed2d7117f7d1504e105d12084bfec11835415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_leavitt, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 10:06:07 compute-0 podman[242107]: 2025-12-05 10:06:07.158624363 +0000 UTC m=+0.166432532 container attach 9e3f1938c711d5b215c860e8db8ed2d7117f7d1504e105d12084bfec11835415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_leavitt, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:06:07 compute-0 fervent_leavitt[242130]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:06:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:07 compute-0 fervent_leavitt[242130]: --> All data devices are unavailable
Dec 05 10:06:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:06:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:07.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:06:07 compute-0 systemd[1]: libpod-9e3f1938c711d5b215c860e8db8ed2d7117f7d1504e105d12084bfec11835415.scope: Deactivated successfully.
Dec 05 10:06:07 compute-0 podman[242107]: 2025-12-05 10:06:07.520306227 +0000 UTC m=+0.528114376 container died 9e3f1938c711d5b215c860e8db8ed2d7117f7d1504e105d12084bfec11835415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 10:06:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa8adc7832e34e934366fbfa2258e1c0e1857deb4fa2919a1336bd54a08411dd-merged.mount: Deactivated successfully.
Dec 05 10:06:07 compute-0 podman[242107]: 2025-12-05 10:06:07.707334707 +0000 UTC m=+0.715142876 container remove 9e3f1938c711d5b215c860e8db8ed2d7117f7d1504e105d12084bfec11835415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_leavitt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:06:07 compute-0 systemd[1]: libpod-conmon-9e3f1938c711d5b215c860e8db8ed2d7117f7d1504e105d12084bfec11835415.scope: Deactivated successfully.
Dec 05 10:06:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:07 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004210 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:07 compute-0 sudo[241991]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:07 compute-0 sudo[242208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:06:07 compute-0 sudo[242208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:06:07 compute-0 sudo[242208]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:07 compute-0 sudo[242233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:06:07 compute-0 sudo[242233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:06:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:08.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:08 compute-0 sudo[242439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjpowxhrgjpobrbyhzlwtmrxwocweerx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929168.0678828-2112-182695334648452/AnsiballZ_systemd_service.py'
Dec 05 10:06:08 compute-0 sudo[242439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:08 compute-0 podman[242392]: 2025-12-05 10:06:08.292053317 +0000 UTC m=+0.027760623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:06:08 compute-0 podman[242392]: 2025-12-05 10:06:08.485700457 +0000 UTC m=+0.221407783 container create 648d81d9996350ec70e27cb7557621a906182c3ac52ad870a036b65ad38333b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_brahmagupta, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 10:06:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:08 compute-0 python3.9[242441]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:06:08 compute-0 sudo[242439]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:08 compute-0 ceph-mon[74418]: pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:08 compute-0 systemd[1]: Started libpod-conmon-648d81d9996350ec70e27cb7557621a906182c3ac52ad870a036b65ad38333b9.scope.
Dec 05 10:06:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:06:09 compute-0 podman[242392]: 2025-12-05 10:06:09.023563027 +0000 UTC m=+0.759270343 container init 648d81d9996350ec70e27cb7557621a906182c3ac52ad870a036b65ad38333b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_brahmagupta, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 10:06:09 compute-0 podman[242392]: 2025-12-05 10:06:09.035019547 +0000 UTC m=+0.770726853 container start 648d81d9996350ec70e27cb7557621a906182c3ac52ad870a036b65ad38333b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:06:09 compute-0 podman[242392]: 2025-12-05 10:06:09.039124528 +0000 UTC m=+0.774831914 container attach 648d81d9996350ec70e27cb7557621a906182c3ac52ad870a036b65ad38333b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_brahmagupta, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:06:09 compute-0 sweet_brahmagupta[242544]: 167 167
Dec 05 10:06:09 compute-0 systemd[1]: libpod-648d81d9996350ec70e27cb7557621a906182c3ac52ad870a036b65ad38333b9.scope: Deactivated successfully.
Dec 05 10:06:09 compute-0 podman[242392]: 2025-12-05 10:06:09.0439912 +0000 UTC m=+0.779698506 container died 648d81d9996350ec70e27cb7557621a906182c3ac52ad870a036b65ad38333b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecb6b443d7880e198ee16c2fead08365a1e1c5cf8a2e1f90b9ccc49547e08b76-merged.mount: Deactivated successfully.
Dec 05 10:06:09 compute-0 podman[242392]: 2025-12-05 10:06:09.091996561 +0000 UTC m=+0.827703847 container remove 648d81d9996350ec70e27cb7557621a906182c3ac52ad870a036b65ad38333b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 10:06:09 compute-0 systemd[1]: libpod-conmon-648d81d9996350ec70e27cb7557621a906182c3ac52ad870a036b65ad38333b9.scope: Deactivated successfully.
Dec 05 10:06:09 compute-0 sudo[242613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yipjwdqqgsyoofflrstvvpwccioywtyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929168.8322995-2112-223650515829345/AnsiballZ_systemd_service.py'
Dec 05 10:06:09 compute-0 sudo[242613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:09 compute-0 podman[242621]: 2025-12-05 10:06:09.280397509 +0000 UTC m=+0.053253486 container create 5e0b3591ce293eff0098937f523a3ed82f95d58b2b17c10cbb6ffbe82ad5d5f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_noether, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:06:09 compute-0 systemd[1]: Started libpod-conmon-5e0b3591ce293eff0098937f523a3ed82f95d58b2b17c10cbb6ffbe82ad5d5f3.scope.
Dec 05 10:06:09 compute-0 podman[242621]: 2025-12-05 10:06:09.249594304 +0000 UTC m=+0.022450251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:06:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:06:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6ac66eb1a2c25a0a7908a9a0251dd3df1677413bd3142393681b8e28670860/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6ac66eb1a2c25a0a7908a9a0251dd3df1677413bd3142393681b8e28670860/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6ac66eb1a2c25a0a7908a9a0251dd3df1677413bd3142393681b8e28670860/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6ac66eb1a2c25a0a7908a9a0251dd3df1677413bd3142393681b8e28670860/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:09 compute-0 podman[242621]: 2025-12-05 10:06:09.39997349 +0000 UTC m=+0.172829477 container init 5e0b3591ce293eff0098937f523a3ed82f95d58b2b17c10cbb6ffbe82ad5d5f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:06:09 compute-0 podman[242621]: 2025-12-05 10:06:09.411454081 +0000 UTC m=+0.184310008 container start 5e0b3591ce293eff0098937f523a3ed82f95d58b2b17c10cbb6ffbe82ad5d5f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:06:09 compute-0 podman[242621]: 2025-12-05 10:06:09.417004831 +0000 UTC m=+0.189860788 container attach 5e0b3591ce293eff0098937f523a3ed82f95d58b2b17c10cbb6ffbe82ad5d5f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_noether, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 10:06:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.431319) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929169431441, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1472, "num_deletes": 256, "total_data_size": 2984724, "memory_usage": 3026544, "flush_reason": "Manual Compaction"}
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929169461948, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2885059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17778, "largest_seqno": 19249, "table_properties": {"data_size": 2878018, "index_size": 4112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 13712, "raw_average_key_size": 19, "raw_value_size": 2864166, "raw_average_value_size": 3989, "num_data_blocks": 178, "num_entries": 718, "num_filter_entries": 718, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929026, "oldest_key_time": 1764929026, "file_creation_time": 1764929169, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 30761 microseconds, and 7414 cpu microseconds.
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:06:09 compute-0 python3.9[242615]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.462048) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2885059 bytes OK
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.462131) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.464914) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.464937) EVENT_LOG_v1 {"time_micros": 1764929169464930, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.464967) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2978388, prev total WAL file size 2978388, number of live WAL files 2.
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.466177) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(2817KB)], [38(11MB)]
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929169466480, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14975516, "oldest_snapshot_seqno": -1}
Dec 05 10:06:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:09.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:09 compute-0 sudo[242613]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:09 compute-0 sudo[242644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:06:09 compute-0 sudo[242644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:06:09 compute-0 sudo[242644]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 5134 keys, 14471141 bytes, temperature: kUnknown
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929169627454, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 14471141, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14434386, "index_size": 22821, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 130452, "raw_average_key_size": 25, "raw_value_size": 14338619, "raw_average_value_size": 2792, "num_data_blocks": 938, "num_entries": 5134, "num_filter_entries": 5134, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764929169, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.628760) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 14471141 bytes
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.630642) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.5 rd, 89.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 11.5 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(10.2) write-amplify(5.0) OK, records in: 5664, records dropped: 530 output_compression: NoCompression
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.630685) EVENT_LOG_v1 {"time_micros": 1764929169630667, "job": 18, "event": "compaction_finished", "compaction_time_micros": 161918, "compaction_time_cpu_micros": 37860, "output_level": 6, "num_output_files": 1, "total_output_size": 14471141, "num_input_records": 5664, "num_output_records": 5134, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929169631976, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929169636367, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.466000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.636417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.636423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.636426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.636429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:06:09 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:06:09.636432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:06:09 compute-0 adoring_noether[242638]: {
Dec 05 10:06:09 compute-0 adoring_noether[242638]:     "1": [
Dec 05 10:06:09 compute-0 adoring_noether[242638]:         {
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             "devices": [
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "/dev/loop3"
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             ],
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             "lv_name": "ceph_lv0",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             "lv_size": "21470642176",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             "name": "ceph_lv0",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             "tags": {
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.cluster_name": "ceph",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.crush_device_class": "",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.encrypted": "0",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.osd_id": "1",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.type": "block",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.vdo": "0",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:                 "ceph.with_tpm": "0"
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             },
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             "type": "block",
Dec 05 10:06:09 compute-0 adoring_noether[242638]:             "vg_name": "ceph_vg0"
Dec 05 10:06:09 compute-0 adoring_noether[242638]:         }
Dec 05 10:06:09 compute-0 adoring_noether[242638]:     ]
Dec 05 10:06:09 compute-0 adoring_noether[242638]: }
Dec 05 10:06:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:09 compute-0 systemd[1]: libpod-5e0b3591ce293eff0098937f523a3ed82f95d58b2b17c10cbb6ffbe82ad5d5f3.scope: Deactivated successfully.
Dec 05 10:06:09 compute-0 podman[242621]: 2025-12-05 10:06:09.744686004 +0000 UTC m=+0.517541941 container died 5e0b3591ce293eff0098937f523a3ed82f95d58b2b17c10cbb6ffbe82ad5d5f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 05 10:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-db6ac66eb1a2c25a0a7908a9a0251dd3df1677413bd3142393681b8e28670860-merged.mount: Deactivated successfully.
Dec 05 10:06:09 compute-0 podman[242621]: 2025-12-05 10:06:09.803684463 +0000 UTC m=+0.576540400 container remove 5e0b3591ce293eff0098937f523a3ed82f95d58b2b17c10cbb6ffbe82ad5d5f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 10:06:09 compute-0 systemd[1]: libpod-conmon-5e0b3591ce293eff0098937f523a3ed82f95d58b2b17c10cbb6ffbe82ad5d5f3.scope: Deactivated successfully.
Dec 05 10:06:09 compute-0 sudo[242233]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:09 compute-0 sudo[242783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:06:09 compute-0 sudo[242783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:06:09 compute-0 sudo[242783]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:09 compute-0 sudo[242879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lexdevodbatoisaolsflzzfmlksaqisw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929169.678224-2112-33425001342801/AnsiballZ_systemd_service.py'
Dec 05 10:06:09 compute-0 sudo[242879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:09 compute-0 sudo[242842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:06:10 compute-0 sudo[242842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:06:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100610 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:06:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:10 compute-0 python3.9[242884]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:06:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:06:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:10.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:06:10 compute-0 sudo[242879]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:10 compute-0 podman[242927]: 2025-12-05 10:06:10.43660406 +0000 UTC m=+0.054054316 container create 7551bb10169e19855faa01cc3566342167d7463e3f83ecf10554d4a3c93b081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 05 10:06:10 compute-0 systemd[1]: Started libpod-conmon-7551bb10169e19855faa01cc3566342167d7463e3f83ecf10554d4a3c93b081c.scope.
Dec 05 10:06:10 compute-0 podman[242927]: 2025-12-05 10:06:10.41963594 +0000 UTC m=+0.037086196 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:06:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:06:10 compute-0 podman[242927]: 2025-12-05 10:06:10.530414013 +0000 UTC m=+0.147864269 container init 7551bb10169e19855faa01cc3566342167d7463e3f83ecf10554d4a3c93b081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:06:10 compute-0 podman[242927]: 2025-12-05 10:06:10.539246372 +0000 UTC m=+0.156696628 container start 7551bb10169e19855faa01cc3566342167d7463e3f83ecf10554d4a3c93b081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 10:06:10 compute-0 podman[242927]: 2025-12-05 10:06:10.542780018 +0000 UTC m=+0.160230284 container attach 7551bb10169e19855faa01cc3566342167d7463e3f83ecf10554d4a3c93b081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:06:10 compute-0 hungry_heisenberg[242990]: 167 167
Dec 05 10:06:10 compute-0 systemd[1]: libpod-7551bb10169e19855faa01cc3566342167d7463e3f83ecf10554d4a3c93b081c.scope: Deactivated successfully.
Dec 05 10:06:10 compute-0 conmon[242990]: conmon 7551bb10169e19855faa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7551bb10169e19855faa01cc3566342167d7463e3f83ecf10554d4a3c93b081c.scope/container/memory.events
Dec 05 10:06:10 compute-0 podman[242927]: 2025-12-05 10:06:10.545951054 +0000 UTC m=+0.163401300 container died 7551bb10169e19855faa01cc3566342167d7463e3f83ecf10554d4a3c93b081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 05 10:06:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-744116ca966bff317665e9be1c4514693ca279714e173e3cbdedac85f9aac6c0-merged.mount: Deactivated successfully.
Dec 05 10:06:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:06:10 compute-0 podman[242927]: 2025-12-05 10:06:10.578577069 +0000 UTC m=+0.196027295 container remove 7551bb10169e19855faa01cc3566342167d7463e3f83ecf10554d4a3c93b081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:06:10 compute-0 systemd[1]: libpod-conmon-7551bb10169e19855faa01cc3566342167d7463e3f83ecf10554d4a3c93b081c.scope: Deactivated successfully.
Dec 05 10:06:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:10 compute-0 podman[243088]: 2025-12-05 10:06:10.752354279 +0000 UTC m=+0.049618626 container create 296bd764221f27eef502e9e66b26671b192d3a99634ca8bd6db705805a5b7252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:06:10 compute-0 sudo[243131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbshdirvvrbsikisqakwnteyyntwnqnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929170.488874-2112-210926614991958/AnsiballZ_systemd_service.py'
Dec 05 10:06:10 compute-0 sudo[243131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:10 compute-0 systemd[1]: Started libpod-conmon-296bd764221f27eef502e9e66b26671b192d3a99634ca8bd6db705805a5b7252.scope.
Dec 05 10:06:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b919de648b389982704275f14e8c2317e0d1f3ddf92998c293281675ecb79c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b919de648b389982704275f14e8c2317e0d1f3ddf92998c293281675ecb79c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b919de648b389982704275f14e8c2317e0d1f3ddf92998c293281675ecb79c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b919de648b389982704275f14e8c2317e0d1f3ddf92998c293281675ecb79c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:10 compute-0 podman[243088]: 2025-12-05 10:06:10.737721783 +0000 UTC m=+0.034986150 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:06:10 compute-0 podman[243088]: 2025-12-05 10:06:10.838407492 +0000 UTC m=+0.135671859 container init 296bd764221f27eef502e9e66b26671b192d3a99634ca8bd6db705805a5b7252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:06:10 compute-0 podman[243088]: 2025-12-05 10:06:10.850746596 +0000 UTC m=+0.148010943 container start 296bd764221f27eef502e9e66b26671b192d3a99634ca8bd6db705805a5b7252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:06:10 compute-0 podman[243088]: 2025-12-05 10:06:10.854541909 +0000 UTC m=+0.151806276 container attach 296bd764221f27eef502e9e66b26671b192d3a99634ca8bd6db705805a5b7252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:06:11 compute-0 python3.9[243133]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:06:11 compute-0 sudo[243131]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:11 compute-0 lvm[243305]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:06:11 compute-0 lvm[243305]: VG ceph_vg0 finished
Dec 05 10:06:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:11.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:11 compute-0 eager_kepler[243136]: {}
Dec 05 10:06:11 compute-0 systemd[1]: libpod-296bd764221f27eef502e9e66b26671b192d3a99634ca8bd6db705805a5b7252.scope: Deactivated successfully.
Dec 05 10:06:11 compute-0 podman[243088]: 2025-12-05 10:06:11.567444754 +0000 UTC m=+0.864709111 container died 296bd764221f27eef502e9e66b26671b192d3a99634ca8bd6db705805a5b7252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:06:11 compute-0 systemd[1]: libpod-296bd764221f27eef502e9e66b26671b192d3a99634ca8bd6db705805a5b7252.scope: Consumed 1.069s CPU time.
Dec 05 10:06:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6b919de648b389982704275f14e8c2317e0d1f3ddf92998c293281675ecb79c-merged.mount: Deactivated successfully.
Dec 05 10:06:11 compute-0 podman[243088]: 2025-12-05 10:06:11.608449356 +0000 UTC m=+0.905713733 container remove 296bd764221f27eef502e9e66b26671b192d3a99634ca8bd6db705805a5b7252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:06:11 compute-0 systemd[1]: libpod-conmon-296bd764221f27eef502e9e66b26671b192d3a99634ca8bd6db705805a5b7252.scope: Deactivated successfully.
Dec 05 10:06:11 compute-0 ceph-mon[74418]: pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:06:11 compute-0 sudo[242842]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:06:11 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:06:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:06:11 compute-0 sudo[243376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jozwcnsgajrlfyjrhfdpbiwtwivfcyas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929171.2768571-2112-242884653576084/AnsiballZ_systemd_service.py'
Dec 05 10:06:11 compute-0 sudo[243376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:11 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:06:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:11 compute-0 sudo[243379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:06:11 compute-0 sudo[243379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:06:11 compute-0 sudo[243379]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:11 compute-0 python3.9[243378]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:06:12 compute-0 sudo[243376]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:12.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:12 compute-0 sudo[243556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzkajmhicaakjobuojbdqfriegxwsdrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929172.1561925-2112-80967793461157/AnsiballZ_systemd_service.py'
Dec 05 10:06:12 compute-0 sudo[243556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:06:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:06:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb40023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:06:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:06:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:06:12 compute-0 python3.9[243558]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:06:12 compute-0 sudo[243556]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:13 compute-0 sudo[243709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpngoolhqmvwnrqpuiegmeqosxvfvawr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929172.9421313-2112-3063210069778/AnsiballZ_systemd_service.py'
Dec 05 10:06:13 compute-0 sudo[243709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:06:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:13.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:06:13 compute-0 python3.9[243711]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:06:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:13.566Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:06:13 compute-0 sudo[243709]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:13 compute-0 ceph-mon[74418]: pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:13 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb40023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:14.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:14 compute-0 sudo[243863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouqvhkhoixulcvwkfycmtqgdgnjhsmxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929173.7695343-2112-272974094734792/AnsiballZ_systemd_service.py'
Dec 05 10:06:14 compute-0 sudo[243863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:06:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:06:14 compute-0 python3.9[243866]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:06:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:14 compute-0 sudo[243863]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:15 compute-0 sudo[244017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsauumzlnilzjqousdepturkigwpwyef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929175.1622584-2289-871591078421/AnsiballZ_file.py'
Dec 05 10:06:15 compute-0 sudo[244017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:15.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:06:15] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 10:06:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:06:15] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 10:06:15 compute-0 python3.9[244019]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:15 compute-0 sudo[244017]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:15 compute-0 ceph-mon[74418]: pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:06:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb40023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:16 compute-0 sudo[244170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gemyhsammjuosmvngjcpoppjswwhflfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929175.883186-2289-60294374962537/AnsiballZ_file.py'
Dec 05 10:06:16 compute-0 sudo[244170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:16.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:16 compute-0 python3.9[244172]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:16 compute-0 sudo[244170]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:06:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:16 compute-0 sudo[244333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxlbfegoayyulgepseiuljfizpxqgfsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929176.5160863-2289-202380005439462/AnsiballZ_file.py'
Dec 05 10:06:16 compute-0 sudo[244333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:16 compute-0 podman[244297]: 2025-12-05 10:06:16.847919893 +0000 UTC m=+0.080160965 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 05 10:06:17 compute-0 python3.9[244341]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:17 compute-0 sudo[244333]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:17.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:06:17 compute-0 sudo[244494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjezxklookteycyklbypcmnwhmfhsosc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929177.174134-2289-46579250194161/AnsiballZ_file.py'
Dec 05 10:06:17 compute-0 sudo[244494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:06:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:17.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:06:17 compute-0 python3.9[244496]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:17 compute-0 sudo[244494]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:17 compute-0 ceph-mon[74418]: pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:06:18 compute-0 sudo[244647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovfhlfjtsubucdlztrqliattpljrxijz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929177.8546324-2289-150512836647411/AnsiballZ_file.py'
Dec 05 10:06:18 compute-0 sudo[244647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb40023b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:18.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:18 compute-0 python3.9[244649]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:18 compute-0 sudo[244647]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:06:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:18 compute-0 sudo[244800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gncysxetsqpwvevwrmekdveyrfcilrhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929178.639515-2289-149423823424777/AnsiballZ_file.py'
Dec 05 10:06:18 compute-0 sudo[244800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:19 compute-0 ceph-mon[74418]: pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:06:19 compute-0 python3.9[244802]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:19 compute-0 sudo[244800]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:06:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:19 compute-0 sudo[244952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tusgdnxnrtakregwtropdoraxtmcwqdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929179.3239331-2289-173895853308754/AnsiballZ_file.py'
Dec 05 10:06:19 compute-0 sudo[244952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:06:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:19 compute-0 python3.9[244954]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:19 compute-0 sudo[244952]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:20 compute-0 sudo[245105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haiqrttcxcgjfnyttqtqduaqktaytegh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929180.017547-2289-244812571987114/AnsiballZ_file.py'
Dec 05 10:06:20 compute-0 sudo[245105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:20.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:20 compute-0 python3.9[245107]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:20 compute-0 sudo[245105]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:06:20.561 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:06:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:06:20.562 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:06:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:06:20.562 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:06:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:06:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:20 compute-0 sudo[245258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlvnucjmttffurdxktpxnowxoozlsufj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929180.6762772-2460-187381332736629/AnsiballZ_file.py'
Dec 05 10:06:20 compute-0 sudo[245258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:21 compute-0 python3.9[245260]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:21 compute-0 sudo[245258]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:21.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:21 compute-0 sudo[245410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrttuntpkvsnpjkbvhfyfmcfgctqiivc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929181.2669754-2460-232104798387761/AnsiballZ_file.py'
Dec 05 10:06:21 compute-0 sudo[245410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:21 compute-0 ceph-mon[74418]: pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:06:21 compute-0 python3.9[245412]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:21 compute-0 sudo[245410]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:21 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:22 compute-0 sudo[245563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reenrhbgqzkxozxljiiregbgwicrgxeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929181.8828442-2460-229371070587864/AnsiballZ_file.py'
Dec 05 10:06:22 compute-0 sudo[245563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:22.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:22 compute-0 python3.9[245565]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:22 compute-0 sudo[245563]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:06:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:06:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:06:23 compute-0 sudo[245716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqikfmqxisrbeeizvmxdgmwdbtviofoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929182.5016809-2460-147489153462322/AnsiballZ_file.py'
Dec 05 10:06:23 compute-0 sudo[245716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:23 compute-0 python3.9[245718]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:23 compute-0 sudo[245716]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:23.568Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:06:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:06:23 compute-0 ceph-mon[74418]: pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:06:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:06:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:24.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:06:24 compute-0 sudo[245869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrbnjvcgchhtpfvoejrqnwnzkymznsos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929183.5449057-2460-182645274703058/AnsiballZ_file.py'
Dec 05 10:06:24 compute-0 sudo[245869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:06:24 compute-0 python3.9[245872]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:24 compute-0 sudo[245869]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Dec 05 10:06:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:24 compute-0 ceph-mon[74418]: pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Dec 05 10:06:25 compute-0 sudo[246022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lekkhsxuwpxffcbqblrktkvhefhzmzex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929184.6848345-2460-9687454091241/AnsiballZ_file.py'
Dec 05 10:06:25 compute-0 sudo[246022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:25 compute-0 python3.9[246024]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:25 compute-0 sudo[246022]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:25.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:06:25] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 10:06:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:06:25] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec 05 10:06:25 compute-0 sudo[246188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecmwmcgjhqfbjfffnqbbbhvmawsarijj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929185.3661957-2460-183044220359178/AnsiballZ_file.py'
Dec 05 10:06:25 compute-0 sudo[246188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:25 compute-0 podman[246148]: 2025-12-05 10:06:25.653361075 +0000 UTC m=+0.061190420 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:06:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:25 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:25 compute-0 python3.9[246194]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:25 compute-0 sudo[246188]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:26.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:26 compute-0 sudo[246346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyyaypavdxtknlqkucumfcknjeghdjre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929186.0806282-2460-25378599830156/AnsiballZ_file.py'
Dec 05 10:06:26 compute-0 sudo[246346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec 05 10:06:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:26 compute-0 python3.9[246348]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:06:26 compute-0 sudo[246346]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:06:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:27.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:06:27
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['vms', 'volumes', 'backups', '.mgr', 'default.rgw.meta', '.rgw.root', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.log']
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:06:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:27.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:06:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:06:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:06:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:06:27 compute-0 ceph-mon[74418]: pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec 05 10:06:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:06:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:28.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:28 compute-0 sudo[246500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djpiriapqwjavebeegyltkannxciblsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929188.2463553-2634-218493503904712/AnsiballZ_command.py'
Dec 05 10:06:28 compute-0 sudo[246500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec 05 10:06:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:28 compute-0 python3.9[246502]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:06:28 compute-0 sudo[246500]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:06:29 compute-0 ceph-mon[74418]: pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec 05 10:06:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:06:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:29.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:06:29 compute-0 sudo[246655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:06:29 compute-0 sudo[246655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:06:29 compute-0 sudo[246655]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:29 compute-0 python3.9[246654]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 10:06:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004570 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:30.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:30 compute-0 sudo[246831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzqjylfgphcxcqggvhekyoqqrxjmfemb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929190.0850904-2688-82693967760918/AnsiballZ_systemd_service.py'
Dec 05 10:06:30 compute-0 sudo[246831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:06:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:30 compute-0 python3.9[246833]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 10:06:30 compute-0 systemd[1]: Reloading.
Dec 05 10:06:30 compute-0 systemd-sysv-generator[246863]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:06:30 compute-0 systemd-rc-local-generator[246857]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:06:31 compute-0 sudo[246831]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:31 compute-0 podman[246869]: 2025-12-05 10:06:31.137163696 +0000 UTC m=+0.078512279 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:06:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:31.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:31 compute-0 ceph-mon[74418]: pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:06:31 compute-0 sudo[247044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weuksalhcbskwiajsfokfiyzhjkcikfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929191.369627-2712-72515588673974/AnsiballZ_command.py'
Dec 05 10:06:31 compute-0 sudo[247044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:31 compute-0 python3.9[247046]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:06:31 compute-0 sudo[247044]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100632 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:06:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:32.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:32 compute-0 sudo[247199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flgsagolcglhppyabyitaihphevysmeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929192.0805483-2712-145640625437083/AnsiballZ_command.py'
Dec 05 10:06:32 compute-0 sudo[247199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:06:32 compute-0 python3.9[247201]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:06:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:32 compute-0 sudo[247199]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:33 compute-0 sudo[247352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgqspktrylxlxgjmkvrrcwqtelwwafle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929192.8322694-2712-113259737670063/AnsiballZ_command.py'
Dec 05 10:06:33 compute-0 sudo[247352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:33 compute-0 python3.9[247354]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:06:33 compute-0 sudo[247352]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:06:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:33.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:06:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:33.569Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:06:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:33.570Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:06:33 compute-0 sudo[247505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kclbexhqlmvcrmxuguclszrdpxdwhgnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929193.461851-2712-213655115121529/AnsiballZ_command.py'
Dec 05 10:06:33 compute-0 sudo[247505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:33 compute-0 ceph-mon[74418]: pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:06:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:33 compute-0 python3.9[247507]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:06:33 compute-0 sudo[247505]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:34.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:34 compute-0 sudo[247661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htcldolpbptnhcgxhilprukmqpqwxdoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929194.096637-2712-35372709628089/AnsiballZ_command.py'
Dec 05 10:06:34 compute-0 sudo[247661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:06:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:06:34 compute-0 python3.9[247663]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:06:34 compute-0 sudo[247661]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:35 compute-0 sudo[247814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkicrmpotmvzuqrltraqnqxpqvjlcdvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929195.0951781-2712-251980420880229/AnsiballZ_command.py'
Dec 05 10:06:35 compute-0 sudo[247814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:35.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:35 compute-0 python3.9[247816]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:06:35 compute-0 ceph-mon[74418]: pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:06:35 compute-0 sudo[247814]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:06:35] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Dec 05 10:06:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:06:35] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Dec 05 10:06:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:36 compute-0 sudo[247969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkddqycplkqhrwmrlfmtfudxipdlnjvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929195.7568986-2712-119004087607237/AnsiballZ_command.py'
Dec 05 10:06:36 compute-0 sudo[247969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:36.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:36 compute-0 python3.9[247971]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:06:36 compute-0 sudo[247969]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:06:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:36 compute-0 sudo[248123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvhqwgbgojokzyxenhdnbtgguolpvulr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929196.6303642-2712-140252205029151/AnsiballZ_command.py'
Dec 05 10:06:36 compute-0 sudo[248123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:37 compute-0 python3.9[248125]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 10:06:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:37.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:06:37 compute-0 sudo[248123]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:37.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:37 compute-0 ceph-mon[74418]: pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:06:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:38.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:06:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:06:39 compute-0 sudo[248278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vodeciugcvvsurdwqhuwghsjnptcanlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929199.0956488-2919-185195180272609/AnsiballZ_file.py'
Dec 05 10:06:39 compute-0 sudo[248278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:39.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:39 compute-0 python3.9[248280]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:39 compute-0 ceph-mon[74418]: pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:06:39 compute-0 sudo[248278]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:40 compute-0 sudo[248431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjjcmdxpwpyqrnrvbumqxvvennxbpjvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929199.8378527-2919-56272749716052/AnsiballZ_file.py'
Dec 05 10:06:40 compute-0 sudo[248431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:40.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:40 compute-0 python3.9[248433]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:40 compute-0 sudo[248431]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec 05 10:06:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:40 compute-0 sudo[248584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcsxgzujymtdwykdxzszxbubvhyubzyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929200.5323403-2919-269463304068379/AnsiballZ_file.py'
Dec 05 10:06:40 compute-0 sudo[248584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:40 compute-0 python3.9[248586]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:40 compute-0 sudo[248584]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:41 compute-0 sudo[248736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eclbvvjzfwblpyepgljisyruadnwefdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929201.251777-2985-256674879053243/AnsiballZ_file.py'
Dec 05 10:06:41 compute-0 sudo[248736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:41.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:41 compute-0 python3.9[248738]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:41 compute-0 sudo[248736]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:42 compute-0 ceph-mon[74418]: pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec 05 10:06:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:42 compute-0 sudo[248889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlsqidgpzcnpjxifhnbwpamocudlrpgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929202.034383-2985-4880402222580/AnsiballZ_file.py'
Dec 05 10:06:42 compute-0 sudo[248889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:06:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:42.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:06:42 compute-0 python3.9[248891]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:42 compute-0 sudo[248889]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:06:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:06:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:06:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc008dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:43 compute-0 sudo[249042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaupixjjzdmwvekzqpiwbpdvbmjneusp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929202.709458-2985-133438137941895/AnsiballZ_file.py'
Dec 05 10:06:43 compute-0 sudo[249042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:06:43 compute-0 ceph-mon[74418]: pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:06:43 compute-0 python3.9[249044]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:43 compute-0 sudo[249042]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:43.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:43.570Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:06:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:43 compute-0 sudo[249194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idjircsgyhdxysvgewtkcrztnukykwnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929203.436784-2985-188025024078352/AnsiballZ_file.py'
Dec 05 10:06:43 compute-0 sudo[249194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:43 compute-0 python3.9[249196]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:43 compute-0 sudo[249194]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:44 compute-0 sudo[249348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evfmnewejxdtlbbridqferwwtzzraabp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929204.1113222-2985-219807943975941/AnsiballZ_file.py'
Dec 05 10:06:44 compute-0 sudo[249348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:06:44 compute-0 python3.9[249350]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:06:44 compute-0 sudo[249348]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:45 compute-0 sudo[249500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhigkinqmmmqsnbjbmybujalajvsknrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929204.738916-2985-52585556156512/AnsiballZ_file.py'
Dec 05 10:06:45 compute-0 sudo[249500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:45 compute-0 python3.9[249502]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:45 compute-0 sudo[249500]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:06:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:45.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:06:45 compute-0 sudo[249652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugyakuwnxduypvuhlskgnvxdwounjnjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929205.3744578-2985-10970643636231/AnsiballZ_file.py'
Dec 05 10:06:45 compute-0 sudo[249652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:06:45] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:06:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:06:45] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:06:45 compute-0 ceph-mon[74418]: pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:06:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc008dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:45 compute-0 python3.9[249654]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:45 compute-0 sudo[249652]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:46.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc008dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:47.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:06:47 compute-0 podman[249681]: 2025-12-05 10:06:47.413819631 +0000 UTC m=+0.067912802 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 05 10:06:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:47.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:47 compute-0 ceph-mon[74418]: pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:48.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:49 compute-0 ceph-mon[74418]: pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:06:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:49.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:49 compute-0 sudo[249703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:06:49 compute-0 sudo[249703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:06:49 compute-0 sudo[249703]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:50.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:06:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:51.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:51 compute-0 ceph-mon[74418]: pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:06:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:51 compute-0 sudo[249855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phwaiiqaedgpisrvslrnzdvaqkfbfepq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929211.406503-3310-39802087760348/AnsiballZ_getent.py'
Dec 05 10:06:51 compute-0 sudo[249855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:52 compute-0 python3.9[249857]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 05 10:06:52 compute-0 sudo[249855]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:52.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:52 compute-0 sudo[250010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egnrhykjbqfjjbasbarfsambzxoogyhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929212.2164528-3334-190766636840775/AnsiballZ_group.py'
Dec 05 10:06:52 compute-0 sudo[250010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:52 compute-0 python3.9[250012]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 10:06:52 compute-0 groupadd[250013]: group added to /etc/group: name=nova, GID=42436
Dec 05 10:06:52 compute-0 groupadd[250013]: group added to /etc/gshadow: name=nova
Dec 05 10:06:52 compute-0 groupadd[250013]: new group: name=nova, GID=42436
Dec 05 10:06:52 compute-0 ceph-mon[74418]: pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:52 compute-0 sudo[250010]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:53.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:53.572Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:06:53 compute-0 sudo[250168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txnefymkltrsfkuwzjblshpmcirtpssl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929213.1623576-3358-4266323918781/AnsiballZ_user.py'
Dec 05 10:06:53 compute-0 sudo[250168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:06:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003b00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:53 compute-0 python3.9[250170]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 05 10:06:54 compute-0 useradd[250172]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 05 10:06:54 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 10:06:54 compute-0 useradd[250172]: add 'nova' to group 'libvirt'
Dec 05 10:06:54 compute-0 useradd[250172]: add 'nova' to shadow group 'libvirt'
Dec 05 10:06:54 compute-0 sudo[250168]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:54.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:06:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:06:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:55 compute-0 sshd-session[250206]: Accepted publickey for zuul from 192.168.122.30 port 53000 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 10:06:55 compute-0 systemd-logind[789]: New session 55 of user zuul.
Dec 05 10:06:55 compute-0 systemd[1]: Started Session 55 of User zuul.
Dec 05 10:06:55 compute-0 sshd-session[250206]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 10:06:55 compute-0 sshd-session[250209]: Received disconnect from 192.168.122.30 port 53000:11: disconnected by user
Dec 05 10:06:55 compute-0 sshd-session[250209]: Disconnected from user zuul 192.168.122.30 port 53000
Dec 05 10:06:55 compute-0 sshd-session[250206]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:06:55 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Dec 05 10:06:55 compute-0 systemd-logind[789]: Session 55 logged out. Waiting for processes to exit.
Dec 05 10:06:55 compute-0 systemd-logind[789]: Removed session 55.
Dec 05 10:06:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:55.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:06:55] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:06:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:06:55] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:06:55 compute-0 ceph-mon[74418]: pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:06:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:55 compute-0 podman[250333]: 2025-12-05 10:06:55.83666159 +0000 UTC m=+0.062223168 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:06:55 compute-0 python3.9[250370]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:06:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003b20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:06:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:56.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:06:56 compute-0 python3.9[250502]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764929215.5482292-3433-154546195334946/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:06:57.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:06:57 compute-0 python3.9[250652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:06:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:06:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:06:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:06:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:57.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:06:57 compute-0 python3.9[250728]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:06:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:06:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:06:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:06:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:06:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:06:57 compute-0 ceph-mon[74418]: pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:06:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:58 compute-0 python3.9[250878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:06:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:06:58.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:58 compute-0 python3.9[251001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764929217.7758107-3433-236943851848362/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:06:58 compute-0 ceph-mon[74418]: pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:06:59 compute-0 python3.9[251151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:06:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:06:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:06:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:06:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:06:59.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:06:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:06:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:06:59 compute-0 python3.9[251272]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764929218.9201033-3433-99344067253024/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:07:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:07:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:00.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:07:00 compute-0 python3.9[251424]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:07:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:07:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:01 compute-0 python3.9[251546]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764929219.9987295-3433-270779308657326/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:07:01 compute-0 podman[251623]: 2025-12-05 10:07:01.43003608 +0000 UTC m=+0.090812772 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:07:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:01.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:01 compute-0 python3.9[251722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:07:01 compute-0 ceph-mon[74418]: pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:07:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:02 compute-0 python3.9[251843]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764929221.2515624-3433-171879611322766/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:07:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:02.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:03 compute-0 sudo[251995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bysncbshsidphkwpvtdxfacpsxgvjqut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929222.9776962-3682-206511731494949/AnsiballZ_file.py'
Dec 05 10:07:03 compute-0 sudo[251995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:03 compute-0 python3.9[251997]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:07:03 compute-0 sudo[251995]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:03.572Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:07:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:03.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:04 compute-0 sudo[252147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtibflxestuoysclhnainnxndzaxdkce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929223.7565343-3706-238024791774050/AnsiballZ_copy.py'
Dec 05 10:07:04 compute-0 sudo[252147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:04.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:07:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 0 op/s
Dec 05 10:07:04 compute-0 ceph-mon[74418]: pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:04 compute-0 python3.9[252149]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:07:04 compute-0 sudo[252147]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:05 compute-0 sudo[252301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vblarrxjxnoojzawrnsqrwtsmoxwyejb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929224.9469519-3730-252275383104718/AnsiballZ_stat.py'
Dec 05 10:07:05 compute-0 sudo[252301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:05.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:05 compute-0 python3.9[252303]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:07:05 compute-0 sudo[252301]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:07:05] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:07:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:07:05] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:07:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:06 compute-0 sudo[252453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsaykkbrpzeonfhtwzilhhvcqdalewkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929225.8336723-3754-33992389031790/AnsiballZ_stat.py'
Dec 05 10:07:06 compute-0 sudo[252453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:06 compute-0 python3.9[252455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:07:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:06 compute-0 sudo[252453]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:06 compute-0 ceph-mon[74418]: pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 0 op/s
Dec 05 10:07:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:06 compute-0 sudo[252578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqakslvrpjrcqlnudxsyokrnidswhdgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929225.8336723-3754-33992389031790/AnsiballZ_copy.py'
Dec 05 10:07:06 compute-0 sudo[252578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:06 compute-0 python3.9[252580]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764929225.8336723-3754-33992389031790/.source _original_basename=.kp0mby3e follow=False checksum=9db6bb749b17e549fc7edf8b70660d02e5d8e6e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 05 10:07:06 compute-0 sudo[252578]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:07.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:07:07 compute-0 ceph-mon[74418]: pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:07.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:07 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:07 compute-0 python3.9[252732]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:07:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:08 compute-0 python3.9[252886]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:07:09 compute-0 python3.9[253007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764929228.238057-3832-83091492379236/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=81f1f28d070b2613355f782b83a5777fdba9540e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:07:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:07:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:07:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:09.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:07:09 compute-0 ceph-mon[74418]: pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:09 compute-0 sudo[253128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:07:09 compute-0 sudo[253128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:07:09 compute-0 sudo[253128]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:10 compute-0 python3.9[253182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 10:07:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:10.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:10 compute-0 python3.9[253305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764929229.5399835-3877-33609999986929/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=2efe6ae78bce1c26d2c384be079fa366810076ad backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 10:07:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:07:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:11 compute-0 sudo[253455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puepeuukpuxrrdjoxtypmwyjaztoswui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929230.9616776-3928-267036120037102/AnsiballZ_container_config_data.py'
Dec 05 10:07:11 compute-0 sudo[253455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:11 compute-0 python3.9[253457]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 05 10:07:11 compute-0 sudo[253455]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:11.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:11 compute-0 ceph-mon[74418]: pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:07:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:11 compute-0 sudo[253505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:07:11 compute-0 sudo[253505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:07:11 compute-0 sudo[253505]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:12 compute-0 sudo[253553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:07:12 compute-0 sudo[253553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:07:12 compute-0 sudo[253658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkxgtfpbmovejayavkwpkmkcnwyqutbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929231.96551-3955-280792843833155/AnsiballZ_container_config_hash.py'
Dec 05 10:07:12 compute-0 sudo[253658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:12.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:12 compute-0 python3.9[253660]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 10:07:12 compute-0 sudo[253658]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:07:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:07:12 compute-0 sudo[253553]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:07:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:07:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:07:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:07:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:07:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:07:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:07:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:07:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:07:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:07:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:07:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:07:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:07:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:07:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:07:12 compute-0 sudo[253715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:07:12 compute-0 sudo[253715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:07:12 compute-0 sudo[253715]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:12 compute-0 sudo[253740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:07:12 compute-0 sudo[253740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:07:13 compute-0 sudo[253914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nivjntnmhjznqmnzsjtqyuwibqgvorcj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764929232.9365478-3985-34557703689884/AnsiballZ_edpm_container_manage.py'
Dec 05 10:07:13 compute-0 sudo[253914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:13 compute-0 podman[253931]: 2025-12-05 10:07:13.32239874 +0000 UTC m=+0.053912682 container create 5b9ed8310ed48ff82369f8be463dd0023f0622118faf5f45d372d68aca2519c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 05 10:07:13 compute-0 systemd[1]: Started libpod-conmon-5b9ed8310ed48ff82369f8be463dd0023f0622118faf5f45d372d68aca2519c3.scope.
Dec 05 10:07:13 compute-0 podman[253931]: 2025-12-05 10:07:13.301539885 +0000 UTC m=+0.033053867 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:07:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:07:13 compute-0 podman[253931]: 2025-12-05 10:07:13.413315875 +0000 UTC m=+0.144829827 container init 5b9ed8310ed48ff82369f8be463dd0023f0622118faf5f45d372d68aca2519c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:07:13 compute-0 podman[253931]: 2025-12-05 10:07:13.422987127 +0000 UTC m=+0.154501069 container start 5b9ed8310ed48ff82369f8be463dd0023f0622118faf5f45d372d68aca2519c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_taussig, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Dec 05 10:07:13 compute-0 podman[253931]: 2025-12-05 10:07:13.426704888 +0000 UTC m=+0.158218850 container attach 5b9ed8310ed48ff82369f8be463dd0023f0622118faf5f45d372d68aca2519c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:13 compute-0 vibrant_taussig[253948]: 167 167
Dec 05 10:07:13 compute-0 systemd[1]: libpod-5b9ed8310ed48ff82369f8be463dd0023f0622118faf5f45d372d68aca2519c3.scope: Deactivated successfully.
Dec 05 10:07:13 compute-0 conmon[253948]: conmon 5b9ed8310ed48ff82369 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b9ed8310ed48ff82369f8be463dd0023f0622118faf5f45d372d68aca2519c3.scope/container/memory.events
Dec 05 10:07:13 compute-0 podman[253931]: 2025-12-05 10:07:13.432799903 +0000 UTC m=+0.164313865 container died 5b9ed8310ed48ff82369f8be463dd0023f0622118faf5f45d372d68aca2519c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_taussig, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-efc1c37c627430f08f22608d53177b0fcccafe8fb5eb1a3914412ee9987034b9-merged.mount: Deactivated successfully.
Dec 05 10:07:13 compute-0 python3[253917]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 10:07:13 compute-0 podman[253931]: 2025-12-05 10:07:13.485772439 +0000 UTC m=+0.217286381 container remove 5b9ed8310ed48ff82369f8be463dd0023f0622118faf5f45d372d68aca2519c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_taussig, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:13 compute-0 systemd[1]: libpod-conmon-5b9ed8310ed48ff82369f8be463dd0023f0622118faf5f45d372d68aca2519c3.scope: Deactivated successfully.
Dec 05 10:07:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:13.573Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:07:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:13.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:13 compute-0 podman[253992]: 2025-12-05 10:07:13.644355877 +0000 UTC m=+0.038755742 container create 806b6025967e74718ac33a3ad34f18cb319863ff093060cf8b65c47fd4d62d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:07:13 compute-0 systemd[1]: Started libpod-conmon-806b6025967e74718ac33a3ad34f18cb319863ff093060cf8b65c47fd4d62d8c.scope.
Dec 05 10:07:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586c81ac055cb35fd9cb528cf57a99e3714dcb5f73c84334abb035291d54e9fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586c81ac055cb35fd9cb528cf57a99e3714dcb5f73c84334abb035291d54e9fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586c81ac055cb35fd9cb528cf57a99e3714dcb5f73c84334abb035291d54e9fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586c81ac055cb35fd9cb528cf57a99e3714dcb5f73c84334abb035291d54e9fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586c81ac055cb35fd9cb528cf57a99e3714dcb5f73c84334abb035291d54e9fc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:13 compute-0 podman[253992]: 2025-12-05 10:07:13.627144151 +0000 UTC m=+0.021544036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:07:13 compute-0 podman[253992]: 2025-12-05 10:07:13.737015019 +0000 UTC m=+0.131414904 container init 806b6025967e74718ac33a3ad34f18cb319863ff093060cf8b65c47fd4d62d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 10:07:13 compute-0 podman[253992]: 2025-12-05 10:07:13.746372193 +0000 UTC m=+0.140772058 container start 806b6025967e74718ac33a3ad34f18cb319863ff093060cf8b65c47fd4d62d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_visvesvaraya, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 05 10:07:13 compute-0 ceph-mon[74418]: pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:07:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:07:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:07:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:07:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:07:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:07:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:07:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:13 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:14 compute-0 stupefied_visvesvaraya[254012]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:07:14 compute-0 stupefied_visvesvaraya[254012]: --> All data devices are unavailable
Dec 05 10:07:14 compute-0 systemd[1]: libpod-806b6025967e74718ac33a3ad34f18cb319863ff093060cf8b65c47fd4d62d8c.scope: Deactivated successfully.
Dec 05 10:07:14 compute-0 podman[253992]: 2025-12-05 10:07:14.265211837 +0000 UTC m=+0.659611752 container attach 806b6025967e74718ac33a3ad34f18cb319863ff093060cf8b65c47fd4d62d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 10:07:14 compute-0 podman[253992]: 2025-12-05 10:07:14.266964365 +0000 UTC m=+0.661364240 container died 806b6025967e74718ac33a3ad34f18cb319863ff093060cf8b65c47fd4d62d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_visvesvaraya, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 05 10:07:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-586c81ac055cb35fd9cb528cf57a99e3714dcb5f73c84334abb035291d54e9fc-merged.mount: Deactivated successfully.
Dec 05 10:07:14 compute-0 podman[253992]: 2025-12-05 10:07:14.315002357 +0000 UTC m=+0.709402222 container remove 806b6025967e74718ac33a3ad34f18cb319863ff093060cf8b65c47fd4d62d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:07:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:14 compute-0 systemd[1]: libpod-conmon-806b6025967e74718ac33a3ad34f18cb319863ff093060cf8b65c47fd4d62d8c.scope: Deactivated successfully.
Dec 05 10:07:14 compute-0 sudo[253740]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:14.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:14 compute-0 sudo[254042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:07:14 compute-0 sudo[254042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:07:14 compute-0 sudo[254042]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:07:14 compute-0 sudo[254067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:07:14 compute-0 sudo[254067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:07:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:07:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:14 compute-0 podman[254135]: 2025-12-05 10:07:14.883085086 +0000 UTC m=+0.051029585 container create 7c4b69b7eca68b6d91897b132fd94cb76ff87e90ebb0536d1508819ada977b40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 10:07:14 compute-0 systemd[1]: Started libpod-conmon-7c4b69b7eca68b6d91897b132fd94cb76ff87e90ebb0536d1508819ada977b40.scope.
Dec 05 10:07:14 compute-0 podman[254135]: 2025-12-05 10:07:14.865040387 +0000 UTC m=+0.032984916 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:07:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:07:14 compute-0 podman[254135]: 2025-12-05 10:07:14.979766137 +0000 UTC m=+0.147710646 container init 7c4b69b7eca68b6d91897b132fd94cb76ff87e90ebb0536d1508819ada977b40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 05 10:07:14 compute-0 podman[254135]: 2025-12-05 10:07:14.98580872 +0000 UTC m=+0.153753229 container start 7c4b69b7eca68b6d91897b132fd94cb76ff87e90ebb0536d1508819ada977b40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:07:14 compute-0 podman[254135]: 2025-12-05 10:07:14.989166301 +0000 UTC m=+0.157110790 container attach 7c4b69b7eca68b6d91897b132fd94cb76ff87e90ebb0536d1508819ada977b40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_poincare, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 10:07:14 compute-0 youthful_poincare[254151]: 167 167
Dec 05 10:07:14 compute-0 systemd[1]: libpod-7c4b69b7eca68b6d91897b132fd94cb76ff87e90ebb0536d1508819ada977b40.scope: Deactivated successfully.
Dec 05 10:07:14 compute-0 podman[254135]: 2025-12-05 10:07:14.992551824 +0000 UTC m=+0.160496333 container died 7c4b69b7eca68b6d91897b132fd94cb76ff87e90ebb0536d1508819ada977b40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 05 10:07:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c8cf90117206a29bba348a88aa3fb840cd191b344c2a370297613a5c093228c-merged.mount: Deactivated successfully.
Dec 05 10:07:15 compute-0 podman[254135]: 2025-12-05 10:07:15.025830895 +0000 UTC m=+0.193775394 container remove 7c4b69b7eca68b6d91897b132fd94cb76ff87e90ebb0536d1508819ada977b40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:07:15 compute-0 systemd[1]: libpod-conmon-7c4b69b7eca68b6d91897b132fd94cb76ff87e90ebb0536d1508819ada977b40.scope: Deactivated successfully.
Dec 05 10:07:15 compute-0 podman[254182]: 2025-12-05 10:07:15.21858342 +0000 UTC m=+0.051314432 container create 2850268b36cecf063ef63179d189f722bcca264d720161e9168addef90f42ca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Dec 05 10:07:15 compute-0 systemd[1]: Started libpod-conmon-2850268b36cecf063ef63179d189f722bcca264d720161e9168addef90f42ca5.scope.
Dec 05 10:07:15 compute-0 podman[254182]: 2025-12-05 10:07:15.191509627 +0000 UTC m=+0.024240639 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:07:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666ca0926fa1cdf1dd66bde609b4aa49c4e03d9e669625d2c292fca354aa56f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666ca0926fa1cdf1dd66bde609b4aa49c4e03d9e669625d2c292fca354aa56f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666ca0926fa1cdf1dd66bde609b4aa49c4e03d9e669625d2c292fca354aa56f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666ca0926fa1cdf1dd66bde609b4aa49c4e03d9e669625d2c292fca354aa56f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:15 compute-0 podman[254182]: 2025-12-05 10:07:15.312850156 +0000 UTC m=+0.145581198 container init 2850268b36cecf063ef63179d189f722bcca264d720161e9168addef90f42ca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chaplygin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:07:15 compute-0 podman[254182]: 2025-12-05 10:07:15.326390923 +0000 UTC m=+0.159121935 container start 2850268b36cecf063ef63179d189f722bcca264d720161e9168addef90f42ca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chaplygin, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:07:15 compute-0 podman[254182]: 2025-12-05 10:07:15.329911448 +0000 UTC m=+0.162642480 container attach 2850268b36cecf063ef63179d189f722bcca264d720161e9168addef90f42ca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 10:07:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:15.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:07:15] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:07:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:07:15] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]: {
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:     "1": [
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:         {
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             "devices": [
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "/dev/loop3"
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             ],
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             "lv_name": "ceph_lv0",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             "lv_size": "21470642176",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             "name": "ceph_lv0",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             "tags": {
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.cluster_name": "ceph",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.crush_device_class": "",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.encrypted": "0",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.osd_id": "1",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.type": "block",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.vdo": "0",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:                 "ceph.with_tpm": "0"
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             },
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             "type": "block",
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:             "vg_name": "ceph_vg0"
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:         }
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]:     ]
Dec 05 10:07:15 compute-0 gifted_chaplygin[254198]: }
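[editor's note] The JSON emitted by the gifted_chaplygin container above is ceph-volume's LVM inventory for OSD 1 (LV /dev/ceph_vg0/ceph_lv0 on /dev/loop3). As a minimal sketch, assuming `ceph-volume lvm list --format json` is available on the host (typically run as root or inside the cephadm shell), output of this shape can be reduced to an OSD-to-device map like so; the helper name is hypothetical, the field names match the JSON shown here:

    #!/usr/bin/env python3
    # Sketch: map OSD ids to their backing devices from `ceph-volume lvm list --format json`.
    import json
    import subprocess

    def osd_device_map(raw_json: str) -> dict:
        data = json.loads(raw_json)          # top-level keys are OSD ids ("1" above)
        result = {}
        for osd_id, lvs in data.items():
            for lv in lvs:
                result[osd_id] = {
                    "devices": lv.get("devices", []),
                    "lv_path": lv.get("lv_path"),
                    "osd_fsid": lv.get("tags", {}).get("ceph.osd_fsid"),
                }
        return result

    if __name__ == "__main__":
        out = subprocess.run(
            ["ceph-volume", "lvm", "list", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        for osd_id, info in osd_device_map(out).items():
            print(osd_id, info["lv_path"], ",".join(info["devices"]))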
Dec 05 10:07:15 compute-0 systemd[1]: libpod-2850268b36cecf063ef63179d189f722bcca264d720161e9168addef90f42ca5.scope: Deactivated successfully.
Dec 05 10:07:15 compute-0 podman[254182]: 2025-12-05 10:07:15.690907824 +0000 UTC m=+0.523638856 container died 2850268b36cecf063ef63179d189f722bcca264d720161e9168addef90f42ca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:07:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:16 compute-0 ceph-mon[74418]: pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:07:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-666ca0926fa1cdf1dd66bde609b4aa49c4e03d9e669625d2c292fca354aa56f8-merged.mount: Deactivated successfully.
Dec 05 10:07:16 compute-0 podman[254182]: 2025-12-05 10:07:16.246622157 +0000 UTC m=+1.079353169 container remove 2850268b36cecf063ef63179d189f722bcca264d720161e9168addef90f42ca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:16 compute-0 systemd[1]: libpod-conmon-2850268b36cecf063ef63179d189f722bcca264d720161e9168addef90f42ca5.scope: Deactivated successfully.
Dec 05 10:07:16 compute-0 sudo[254067]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:16.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:16 compute-0 sudo[254226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:07:16 compute-0 sudo[254226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:07:16 compute-0 sudo[254226]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:16 compute-0 sudo[254252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:07:16 compute-0 sudo[254252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
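[editor's note] The sudo line above shows the cephadm mgr module driving the host's copied cephadm script to run `ceph-volume ... raw list --format json` inside the ceph container. A rough manual equivalent, assuming `cephadm` is on PATH on the host and using the fsid from that log line, would be:

    # Sketch: reproduce the inventory query that cephadm runs above.
    import json
    import subprocess

    fsid = "3c63ce0f-5206-59ae-8381-b67d0b6424b5"
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", fsid, "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=4))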
Dec 05 10:07:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:17.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:07:17 compute-0 ceph-mon[74418]: pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:07:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:17.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:07:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:18.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:18 compute-0 ceph-mon[74418]: pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:07:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:19.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:20.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:07:20.562 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:07:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:07:20.563 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:07:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:07:20.563 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:07:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:07:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:21.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:21 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003c40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:22.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc00a820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:23.575Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:07:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:23.576Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:07:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:23.576Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:07:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:23.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:07:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:24.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:07:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:07:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:25.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:07:25] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:07:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:07:25] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:07:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:25 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 242 B/s rd, 0 op/s
Dec 05 10:07:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:27.288Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:07:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:07:27
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.nfs', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'vms', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.mgr', 'backups']
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:07:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:07:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:07:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:07:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:27.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
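[editor's note] The pg_autoscaler figures above are reproducible by hand: each pool's raw PG target is its capacity ratio times its bias times the cluster-wide PG budget, and the result is then quantized (to a power of two, or left at the current pg_num when the change is below the autoscaler's threshold). The budget of 300 assumed below would correspond to the default mon_target_pg_per_osd=100 with the 3 OSDs implied by this 60 GiB cluster; that multiplier is an inference from the logged numbers, not stated in the log.

    # Back-of-envelope check of the pg_autoscaler lines above.
    def pg_target(capacity_ratio: float, bias: float, pg_budget: int = 300) -> float:
        return capacity_ratio * bias * pg_budget

    # '.mgr': using 7.185749983720779e-06 of space, bias 1.0
    print(pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557, as logged, quantized to 1
    # 'cephfs.cephfs.meta': using 5.087256625643029e-07 of space, bias 4.0
    print(pg_target(5.087256625643029e-07, 4.0))   # ~0.00061047, as logged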
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:07:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:07:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:28.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:28 compute-0 ceph-mon[74418]: pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:07:28 compute-0 podman[254367]: 2025-12-05 10:07:28.983112698 +0000 UTC m=+2.649649275 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 10:07:28 compute-0 podman[254320]: 2025-12-05 10:07:28.993964672 +0000 UTC m=+10.641908443 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 05 10:07:29 compute-0 podman[253977]: 2025-12-05 10:07:29.04371349 +0000 UTC m=+15.516149109 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec 05 10:07:29 compute-0 podman[254403]: 2025-12-05 10:07:29.090813107 +0000 UTC m=+0.049402400 container create a9ddd223bb1a2a53a38cb0b98fcde3f1cb86cc0c8af41b9177219b5eefbf40fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_leakey, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:07:29 compute-0 systemd[1]: Started libpod-conmon-a9ddd223bb1a2a53a38cb0b98fcde3f1cb86cc0c8af41b9177219b5eefbf40fc.scope.
Dec 05 10:07:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:07:29 compute-0 podman[254403]: 2025-12-05 10:07:29.070074675 +0000 UTC m=+0.028663978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:07:29 compute-0 podman[254403]: 2025-12-05 10:07:29.174502086 +0000 UTC m=+0.133091389 container init a9ddd223bb1a2a53a38cb0b98fcde3f1cb86cc0c8af41b9177219b5eefbf40fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:29 compute-0 podman[254403]: 2025-12-05 10:07:29.181724592 +0000 UTC m=+0.140313875 container start a9ddd223bb1a2a53a38cb0b98fcde3f1cb86cc0c8af41b9177219b5eefbf40fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 10:07:29 compute-0 cranky_leakey[254432]: 167 167
Dec 05 10:07:29 compute-0 systemd[1]: libpod-a9ddd223bb1a2a53a38cb0b98fcde3f1cb86cc0c8af41b9177219b5eefbf40fc.scope: Deactivated successfully.
Dec 05 10:07:29 compute-0 podman[254403]: 2025-12-05 10:07:29.190635223 +0000 UTC m=+0.149224506 container attach a9ddd223bb1a2a53a38cb0b98fcde3f1cb86cc0c8af41b9177219b5eefbf40fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 10:07:29 compute-0 podman[254403]: 2025-12-05 10:07:29.191085636 +0000 UTC m=+0.149674909 container died a9ddd223bb1a2a53a38cb0b98fcde3f1cb86cc0c8af41b9177219b5eefbf40fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_leakey, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb19fb9acc951571e68e9d1bd8f7520168755945530061f1ab0f564bd91e91f3-merged.mount: Deactivated successfully.
Dec 05 10:07:29 compute-0 podman[254403]: 2025-12-05 10:07:29.247556596 +0000 UTC m=+0.206145879 container remove a9ddd223bb1a2a53a38cb0b98fcde3f1cb86cc0c8af41b9177219b5eefbf40fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_leakey, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:07:29 compute-0 systemd[1]: libpod-conmon-a9ddd223bb1a2a53a38cb0b98fcde3f1cb86cc0c8af41b9177219b5eefbf40fc.scope: Deactivated successfully.
Dec 05 10:07:29 compute-0 podman[254444]: 2025-12-05 10:07:29.26133029 +0000 UTC m=+0.089517648 container create 233399513c901968d74261d76525adab10fdd128fd7c05d28b54d1d0ebfa3c62 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=nova_compute_init, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:07:29 compute-0 podman[254444]: 2025-12-05 10:07:29.220182124 +0000 UTC m=+0.048369502 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec 05 10:07:29 compute-0 python3[253917]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
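[editor's note] The PODMAN-CONTAINER-DEBUG line above pairs the config_data dict with the exact `podman create` invocation it produces for nova_compute_init. The sketch below is illustrative only (not the edpm_ansible implementation) and omits extras the real call adds, such as --conmon-pidfile and the --label arguments; it just shows how the dict's keys map onto the flags visible in that line:

    # Sketch: translate a config_data-style dict into podman create arguments.
    import shlex

    def podman_create_argv(name: str, cfg: dict) -> list[str]:
        argv = ["podman", "create", "--name", name, "--log-driver", "journald"]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]          # e.g. NOVA_STATEDIR_OWNERSHIP_SKIP
        if "net" in cfg:
            argv += ["--network", cfg["net"]]          # 'none' above
        argv += [f"--privileged={cfg.get('privileged', False)}"]
        for opt in cfg.get("security_opt", []):
            argv += ["--security-opt", opt]            # 'label=disable' above
        if "user" in cfg:
            argv += ["--user", cfg["user"]]            # 'root' above
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(cfg["image"])
        if cfg.get("command"):
            argv += shlex.split(cfg["command"])        # the statedir ownership one-liner
        return argv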
Dec 05 10:07:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 10:07:29 compute-0 sudo[253914]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:29 compute-0 podman[254506]: 2025-12-05 10:07:29.427693099 +0000 UTC m=+0.051324552 container create 974c58cab45eb472f0f32816285d5a85c12d6b145b32d741101575974c82e2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ride, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:07:29 compute-0 systemd[1]: Started libpod-conmon-974c58cab45eb472f0f32816285d5a85c12d6b145b32d741101575974c82e2ad.scope.
Dec 05 10:07:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:07:29 compute-0 podman[254506]: 2025-12-05 10:07:29.403205306 +0000 UTC m=+0.026836589 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b5c0f719979582016f8b9e7faeb1bd826252b68c9a9d74a44f2efa0bf8081e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b5c0f719979582016f8b9e7faeb1bd826252b68c9a9d74a44f2efa0bf8081e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b5c0f719979582016f8b9e7faeb1bd826252b68c9a9d74a44f2efa0bf8081e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b5c0f719979582016f8b9e7faeb1bd826252b68c9a9d74a44f2efa0bf8081e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:29 compute-0 podman[254506]: 2025-12-05 10:07:29.519133738 +0000 UTC m=+0.142765021 container init 974c58cab45eb472f0f32816285d5a85c12d6b145b32d741101575974c82e2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 10:07:29 compute-0 podman[254506]: 2025-12-05 10:07:29.527516195 +0000 UTC m=+0.151147448 container start 974c58cab45eb472f0f32816285d5a85c12d6b145b32d741101575974c82e2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ride, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:07:29 compute-0 podman[254506]: 2025-12-05 10:07:29.537457075 +0000 UTC m=+0.161088348 container attach 974c58cab45eb472f0f32816285d5a85c12d6b145b32d741101575974c82e2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:07:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:29.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:29 compute-0 sudo[254566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:07:29 compute-0 sudo[254566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:07:29 compute-0 sudo[254566]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:30 compute-0 ceph-mon[74418]: pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:30 compute-0 ceph-mon[74418]: pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:07:30 compute-0 ceph-mon[74418]: pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 242 B/s rd, 0 op/s
Dec 05 10:07:30 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:07:30 compute-0 lvm[254646]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:07:30 compute-0 lvm[254646]: VG ceph_vg0 finished
Dec 05 10:07:30 compute-0 priceless_ride[254522]: {}
Dec 05 10:07:30 compute-0 systemd[1]: libpod-974c58cab45eb472f0f32816285d5a85c12d6b145b32d741101575974c82e2ad.scope: Deactivated successfully.
Dec 05 10:07:30 compute-0 podman[254506]: 2025-12-05 10:07:30.312332089 +0000 UTC m=+0.935963332 container died 974c58cab45eb472f0f32816285d5a85c12d6b145b32d741101575974c82e2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ride, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 05 10:07:30 compute-0 systemd[1]: libpod-974c58cab45eb472f0f32816285d5a85c12d6b145b32d741101575974c82e2ad.scope: Consumed 1.175s CPU time.
Dec 05 10:07:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:07:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:30.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:07:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-52b5c0f719979582016f8b9e7faeb1bd826252b68c9a9d74a44f2efa0bf8081e-merged.mount: Deactivated successfully.
Dec 05 10:07:30 compute-0 podman[254506]: 2025-12-05 10:07:30.478441772 +0000 UTC m=+1.102073025 container remove 974c58cab45eb472f0f32816285d5a85c12d6b145b32d741101575974c82e2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ride, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 05 10:07:30 compute-0 sudo[254252]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:07:30 compute-0 systemd[1]: libpod-conmon-974c58cab45eb472f0f32816285d5a85c12d6b145b32d741101575974c82e2ad.scope: Deactivated successfully.
Dec 05 10:07:30 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:07:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:07:30 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:07:30 compute-0 sudo[254666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:07:30 compute-0 sudo[254666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:07:30 compute-0 sudo[254666]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 10:07:31 compute-0 ceph-mon[74418]: pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 10:07:31 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:07:31 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:07:31 compute-0 sudo[254816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbnyikolyidzakjriuikyqsgvymkpwaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929251.1831248-4009-122613584413431/AnsiballZ_stat.py'
Dec 05 10:07:31 compute-0 sudo[254816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:31 compute-0 podman[254818]: 2025-12-05 10:07:31.590022954 +0000 UTC m=+0.115914033 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:07:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:31.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:31 compute-0 python3.9[254819]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:07:31 compute-0 sudo[254816]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:07:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:07:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:32.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:07:33 compute-0 ceph-mon[74418]: pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.095859) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929253096006, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 882, "num_deletes": 251, "total_data_size": 1530190, "memory_usage": 1553640, "flush_reason": "Manual Compaction"}
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929253114294, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1489229, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19250, "largest_seqno": 20131, "table_properties": {"data_size": 1484753, "index_size": 2128, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9673, "raw_average_key_size": 19, "raw_value_size": 1475865, "raw_average_value_size": 2981, "num_data_blocks": 93, "num_entries": 495, "num_filter_entries": 495, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929170, "oldest_key_time": 1764929170, "file_creation_time": 1764929253, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 18689 microseconds, and 5724 cpu microseconds.
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.114499) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1489229 bytes OK
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.114642) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.118354) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.118372) EVENT_LOG_v1 {"time_micros": 1764929253118367, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.118390) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1525973, prev total WAL file size 1525973, number of live WAL files 2.
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.119517) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1454KB)], [41(13MB)]
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929253119655, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 15960370, "oldest_snapshot_seqno": -1}
Dec 05 10:07:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 5113 keys, 13666304 bytes, temperature: kUnknown
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929253314415, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 13666304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13630329, "index_size": 22105, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 130613, "raw_average_key_size": 25, "raw_value_size": 13535535, "raw_average_value_size": 2647, "num_data_blocks": 904, "num_entries": 5113, "num_filter_entries": 5113, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764929253, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.314724) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13666304 bytes
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.316318) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.9 rd, 70.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 13.8 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(19.9) write-amplify(9.2) OK, records in: 5629, records dropped: 516 output_compression: NoCompression
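The compaction summary above can be cross-checked against the byte counts logged for jobs 19 and 20: table #43 (1,489,229 bytes) is the L0 input, job 20 reports input_data_size 15,960,370, and table #44 (13,666,304 bytes) is the L6 output. A minimal sketch of the arithmetic behind the reported amplification figures, assuming RocksDB's usual definitions:

    # Illustrative check of the amplification figures reported in the log above.
    l0_input = 1_489_229        # table #43 flushed by job 19 (L0 input to job 20)
    total_input = 15_960_370    # job 20 input_data_size (L0 #43 + L6 #41)
    output = 13_666_304         # table #44 written to L6 by job 20

    write_amp = output / l0_input                        # ~9.2  -> "write-amplify(9.2)"
    read_write_amp = (total_input + output) / l0_input   # ~19.9 -> "read-write-amplify(19.9)"
    print(round(write_amp, 1), round(read_write_amp, 1))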
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.316334) EVENT_LOG_v1 {"time_micros": 1764929253316326, "job": 20, "event": "compaction_finished", "compaction_time_micros": 194839, "compaction_time_cpu_micros": 48125, "output_level": 6, "num_output_files": 1, "total_output_size": 13666304, "num_input_records": 5629, "num_output_records": 5113, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929253316693, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929253319154, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.119420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.319205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.319210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.319211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.319213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:07:33 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:07:33.319214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:07:33 compute-0 sudo[254999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btqakeefsmjxlybizxqzvvfzvfdcwwdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929253.0604374-4045-250742183525499/AnsiballZ_container_config_data.py'
Dec 05 10:07:33 compute-0 sudo[254999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:33.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:07:33 compute-0 python3.9[255001]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 05 10:07:33 compute-0 sudo[254999]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:33.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:34 compute-0 sudo[255152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxybnzaygffozfhulnsqtdjqcpemphxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929253.9350467-4072-259005168586788/AnsiballZ_container_config_hash.py'
Dec 05 10:07:34 compute-0 sudo[255152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:34.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:34 compute-0 python3.9[255154]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 10:07:34 compute-0 sudo[255152]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:35 compute-0 ceph-mon[74418]: pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 10:07:35 compute-0 sudo[255305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llabsarlxfctnsxwopkhtvhasdqxtzad ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764929254.85815-4102-106265266932794/AnsiballZ_edpm_container_manage.py'
Dec 05 10:07:35 compute-0 sudo[255305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003ca0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Dec 05 10:07:35 compute-0 python3[255307]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 10:07:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:07:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:35.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:07:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:07:35] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:07:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:07:35] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:07:35 compute-0 podman[255347]: 2025-12-05 10:07:35.683039953 +0000 UTC m=+0.051514197 container create 3505394ad9c216255099f5108dfcd9c7a21dad336485e63533e7fa33170917a7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 05 10:07:35 compute-0 podman[255347]: 2025-12-05 10:07:35.657932882 +0000 UTC m=+0.026407146 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec 05 10:07:35 compute-0 python3[255307]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5 kolla_start
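The PODMAN-CONTAINER-DEBUG line above shows how ansible-edpm_container_manage renders the config_data dictionary from nova_compute.json into the logged podman create invocation. A minimal sketch of that mapping follows; it is illustrative only (the function name build_podman_args is hypothetical, and the real module also attaches the config_id, container_name, managed_by and config_data labels seen in the recorded command):

    # Illustrative sketch: turn a config_data entry like the one logged above
    # into the corresponding "podman create" argument list. Not the actual
    # edpm_container_manage implementation.
    def build_podman_args(name: str, cfg: dict) -> list:
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid",
                "--log-driver", "journald", "--log-level", "info"]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]          # KOLLA_CONFIG_STRATEGY=COPY_ALWAYS
        if cfg.get("net"):
            args += ["--network", cfg["net"]]            # 'net': 'host'
        if cfg.get("pid"):
            args += ["--pid", cfg["pid"]]                # 'pid': 'host'
        if cfg.get("privileged"):
            args.append("--privileged=True")
        if cfg.get("user"):
            args += ["--user", cfg["user"]]              # 'user': 'nova'
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        args.append(cfg["image"])
        if cfg.get("command"):
            args.append(cfg["command"])                  # kolla_start
        return args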
Dec 05 10:07:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:35 compute-0 sudo[255305]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:36 compute-0 sudo[255536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlwophjdytckfydxduttyfdofdusuiuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929256.0498357-4126-191056854522547/AnsiballZ_stat.py'
Dec 05 10:07:36 compute-0 sudo[255536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:36.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:36 compute-0 python3.9[255538]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:07:36 compute-0 sudo[255536]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:37 compute-0 ceph-mon[74418]: pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Dec 05 10:07:37 compute-0 sudo[255691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnkdvvsjikiyrfdbcjpdvtborxnpyyvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929256.9073267-4153-148067397029605/AnsiballZ_file.py'
Dec 05 10:07:37 compute-0 sudo[255691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:37.289Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:07:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:37.289Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:07:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Dec 05 10:07:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:07:37 compute-0 python3.9[255693]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:07:37 compute-0 sudo[255691]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:37.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:38 compute-0 sudo[255844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mylirmjusohjkppvpdeyxgrxgcnjnelb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929257.431246-4153-173066072216315/AnsiballZ_copy.py'
Dec 05 10:07:38 compute-0 sudo[255844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:38 compute-0 python3.9[255846]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764929257.431246-4153-173066072216315/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 10:07:38 compute-0 sudo[255844]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:38.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:38 compute-0 sudo[255922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npjubawlbkyfxwormuxyyzyldogvoshj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929257.431246-4153-173066072216315/AnsiballZ_systemd.py'
Dec 05 10:07:38 compute-0 sudo[255922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:38 compute-0 python3.9[255924]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 10:07:38 compute-0 systemd[1]: Reloading.
Dec 05 10:07:38 compute-0 systemd-rc-local-generator[255944]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:07:38 compute-0 systemd-sysv-generator[255950]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:07:39 compute-0 sudo[255922]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:39 compute-0 ceph-mon[74418]: pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Dec 05 10:07:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 0 B/s wr, 97 op/s
Dec 05 10:07:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:39 compute-0 sudo[256032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjjoitwbksagntneeazywgfpcrabagtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929257.431246-4153-173066072216315/AnsiballZ_systemd.py'
Dec 05 10:07:39 compute-0 sudo[256032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:39.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:39 compute-0 python3.9[256034]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 10:07:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:39 compute-0 systemd[1]: Reloading.
Dec 05 10:07:39 compute-0 systemd-sysv-generator[256067]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 10:07:39 compute-0 systemd-rc-local-generator[256063]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 10:07:40 compute-0 systemd[1]: Starting nova_compute container...
Dec 05 10:07:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18b14667ea017c18a04eb1540cd9ff430740e4f424d492d022a8b518f95f6c4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18b14667ea017c18a04eb1540cd9ff430740e4f424d492d022a8b518f95f6c4/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18b14667ea017c18a04eb1540cd9ff430740e4f424d492d022a8b518f95f6c4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18b14667ea017c18a04eb1540cd9ff430740e4f424d492d022a8b518f95f6c4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18b14667ea017c18a04eb1540cd9ff430740e4f424d492d022a8b518f95f6c4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:40 compute-0 podman[256074]: 2025-12-05 10:07:40.288813583 +0000 UTC m=+0.114013911 container init 3505394ad9c216255099f5108dfcd9c7a21dad336485e63533e7fa33170917a7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:07:40 compute-0 podman[256074]: 2025-12-05 10:07:40.295066593 +0000 UTC m=+0.120266891 container start 3505394ad9c216255099f5108dfcd9c7a21dad336485e63533e7fa33170917a7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125)
Dec 05 10:07:40 compute-0 podman[256074]: nova_compute
Dec 05 10:07:40 compute-0 nova_compute[256089]: + sudo -E kolla_set_configs
Dec 05 10:07:40 compute-0 systemd[1]: Started nova_compute container.
Dec 05 10:07:40 compute-0 sudo[256032]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Validating config file
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Copying service configuration files
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Deleting /etc/ceph
Dec 05 10:07:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Creating directory /etc/ceph
Dec 05 10:07:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:40.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /etc/ceph
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Writing out command to execute
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 10:07:40 compute-0 nova_compute[256089]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 05 10:07:40 compute-0 nova_compute[256089]: ++ cat /run_command
Dec 05 10:07:40 compute-0 nova_compute[256089]: + CMD=nova-compute
Dec 05 10:07:40 compute-0 nova_compute[256089]: + ARGS=
Dec 05 10:07:40 compute-0 nova_compute[256089]: + sudo kolla_copy_cacerts
Dec 05 10:07:40 compute-0 nova_compute[256089]: + [[ ! -n '' ]]
Dec 05 10:07:40 compute-0 nova_compute[256089]: + . kolla_extend_start
Dec 05 10:07:40 compute-0 nova_compute[256089]: Running command: 'nova-compute'
Dec 05 10:07:40 compute-0 nova_compute[256089]: + echo 'Running command: '\''nova-compute'\'''
Dec 05 10:07:40 compute-0 nova_compute[256089]: + umask 0022
Dec 05 10:07:40 compute-0 nova_compute[256089]: + exec nova-compute
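The kolla_set_configs output above is driven by the JSON mounted at /var/lib/kolla/config_files/config.json (bind-mounted from /var/lib/openstack/config/nova per the container's volume list). A rough sketch of that flow, assuming the usual Kolla config.json layout: the owner/perm values and the apply_config helper are assumptions, while the source/dest paths are the ones visible in the copy messages above.

    # Illustrative sketch of the kolla_set_configs flow recorded above; not the
    # actual Kolla implementation.
    import json, pathlib, shutil

    sample_config = {
        "command": "nova-compute",                 # later read back via 'cat /run_command'
        "config_files": [
            {"source": "/var/lib/kolla/config_files/nova-blank.conf",
             "dest": "/etc/nova/nova.conf", "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf", "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/ceph",
             "dest": "/etc/ceph", "owner": "nova", "perm": "0600"},
        ],
    }

    def apply_config(cfg: dict) -> None:
        for entry in cfg["config_files"]:
            src = pathlib.Path(entry["source"])
            dest = pathlib.Path(entry["dest"])
            if src.is_dir():
                shutil.copytree(src, dest, dirs_exist_ok=True)   # "Copying ... to /etc/ceph/..."
            else:
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)                          # "Copying ... to ..."
            dest.chmod(int(entry["perm"], 8))                    # "Setting permission for ..."
        pathlib.Path("/run_command").write_text(cfg["command"])  # "Writing out command to execute"

    if __name__ == "__main__":
        apply_config(json.loads(
            pathlib.Path("/var/lib/kolla/config_files/config.json").read_text()))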
Dec 05 10:07:41 compute-0 ceph-mon[74418]: pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 0 B/s wr, 97 op/s
Dec 05 10:07:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 10:07:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:41 compute-0 python3.9[256252]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:07:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:07:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:07:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:07:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:42.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:07:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:07:42 compute-0 python3.9[256404]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:07:42 compute-0 nova_compute[256089]: 2025-12-05 10:07:42.947 256093 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 10:07:42 compute-0 nova_compute[256089]: 2025-12-05 10:07:42.947 256093 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 10:07:42 compute-0 nova_compute[256089]: 2025-12-05 10:07:42.947 256093 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 10:07:42 compute-0 nova_compute[256089]: 2025-12-05 10:07:42.948 256093 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 05 10:07:43 compute-0 nova_compute[256089]: 2025-12-05 10:07:43.100 256093 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:07:43 compute-0 nova_compute[256089]: 2025-12-05 10:07:43.116 256093 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:07:43 compute-0 nova_compute[256089]: 2025-12-05 10:07:43.116 256093 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 05 10:07:43 compute-0 ceph-mon[74418]: pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 10:07:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:07:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 10:07:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:43.578Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:07:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:43.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:43 compute-0 python3.9[256558]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 10:07:43 compute-0 nova_compute[256089]: 2025-12-05 10:07:43.917 256093 INFO nova.virt.driver [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.071 256093 INFO nova.compute.provider_config [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.137 256093 DEBUG oslo_concurrency.lockutils [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.137 256093 DEBUG oslo_concurrency.lockutils [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.137 256093 DEBUG oslo_concurrency.lockutils [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.138 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.138 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.138 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.139 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.139 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.139 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.139 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.139 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.140 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.140 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.140 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.140 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.140 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.141 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.141 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.141 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.141 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.141 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.142 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.142 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.142 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.142 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.142 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.143 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.143 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.143 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.143 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.143 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.144 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.144 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.144 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.144 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.144 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.145 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.145 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.145 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.145 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.146 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.146 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.146 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.146 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.147 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.147 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.147 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.147 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.148 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.148 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.148 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.148 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.149 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.149 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.149 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.149 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.149 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.150 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.150 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.150 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.150 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.150 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.151 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.151 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.151 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.151 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.151 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.152 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.152 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.152 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.152 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.152 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.153 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.153 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.153 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.153 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.153 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.154 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.154 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.154 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.154 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.154 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.155 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.155 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.155 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.155 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.155 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.156 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.156 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.156 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.156 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.157 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.157 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.157 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.157 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.157 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.158 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.158 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.158 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.158 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.158 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.159 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.159 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.159 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.159 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.159 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.160 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.160 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.160 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.160 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.161 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.161 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.161 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.161 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.161 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.162 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.162 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.162 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.162 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.162 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.163 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.163 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.163 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.163 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.163 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.164 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.164 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.164 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.164 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.164 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.164 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.165 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.165 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.165 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.165 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.165 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.166 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.166 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.166 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.166 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.166 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.167 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.167 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.167 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.167 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.167 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.168 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.168 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.168 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.168 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.168 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.168 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.169 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.169 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.169 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.169 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.169 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.170 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.170 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.170 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.170 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.171 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.171 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.171 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.171 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.171 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.171 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.172 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.172 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.172 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.172 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.172 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.173 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.173 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.173 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.173 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.173 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.174 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.174 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.174 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.174 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.174 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.175 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.175 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.175 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.175 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.176 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.176 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.176 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.176 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.176 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.176 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.177 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.177 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.177 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.177 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.177 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.178 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.178 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.178 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.178 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.178 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.179 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.179 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.179 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.179 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.179 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.180 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.180 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.180 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.180 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.180 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.181 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.181 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.181 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.181 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.181 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.182 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.182 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.182 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.182 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.182 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.182 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.183 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.183 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.183 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.183 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.183 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.184 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.184 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.184 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.184 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.184 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.184 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.184 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.185 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.185 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.185 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.185 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.185 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.185 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.185 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.186 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.186 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.186 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.186 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.186 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.186 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.186 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.187 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.187 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.187 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.187 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.187 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.187 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.187 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.188 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.188 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.188 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.188 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.188 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.188 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.188 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.188 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.189 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.189 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.189 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.189 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.189 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.189 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.189 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.190 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.190 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.190 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.190 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.190 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.190 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.190 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.191 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.191 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.191 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.191 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.191 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.191 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.191 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.192 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.192 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.192 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.192 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.192 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.192 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.192 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.192 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.193 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.193 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.193 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.193 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.193 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.193 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.193 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.194 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.194 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.194 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.194 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.194 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.194 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.194 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.195 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.195 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.195 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.195 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.195 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.195 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.195 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.196 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.196 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.196 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.196 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.196 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.196 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.196 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.197 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.197 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.197 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.197 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.197 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.197 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.197 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.198 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.198 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.198 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.198 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.198 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.198 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.198 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.199 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.199 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.199 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.199 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.199 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.199 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.199 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.200 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.200 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.200 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.200 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.200 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.200 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.200 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.201 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.201 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.201 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.201 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.201 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.201 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.201 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.202 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.202 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.202 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.202 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.202 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.203 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.203 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.203 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.203 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.203 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.203 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.204 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.204 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.204 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.204 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.204 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.204 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.204 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.205 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.205 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.205 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.205 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.205 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.205 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.205 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.206 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.206 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.206 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.206 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.206 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.206 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.206 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.207 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.207 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.207 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.207 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.207 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.207 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.207 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.208 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.208 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.208 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.208 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.208 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.208 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.208 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.209 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.209 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.209 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.209 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.209 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.209 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.210 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.210 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.210 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.210 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.210 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.210 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.210 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.211 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.211 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.211 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.211 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.211 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.211 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.212 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.212 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.212 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.212 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.212 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.212 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.212 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.212 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.213 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.213 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.213 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.213 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.213 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.213 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.213 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.214 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.214 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.214 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.214 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.214 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.214 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.215 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.215 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.215 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.215 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.215 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.215 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.215 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.216 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.216 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.216 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.216 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.216 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.216 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.216 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.217 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.217 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.217 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.217 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.217 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.217 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.218 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.218 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.218 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.218 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.218 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.218 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.218 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.219 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.219 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.219 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.219 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.219 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.219 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.220 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.220 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.220 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.220 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.220 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.220 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.220 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.221 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.221 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.221 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.221 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.221 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.222 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.222 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.222 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.222 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.222 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.222 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.222 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.223 256093 WARNING oslo_config.cfg [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 05 10:07:44 compute-0 nova_compute[256089]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 05 10:07:44 compute-0 nova_compute[256089]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 05 10:07:44 compute-0 nova_compute[256089]: and ``live_migration_inbound_addr`` respectively.
Dec 05 10:07:44 compute-0 nova_compute[256089]: ).  Its value may be silently ignored in the future.
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.223 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.223 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.223 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.224 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.224 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.224 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.224 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.224 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.225 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.225 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.225 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.225 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.225 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.226 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.226 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.226 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.226 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.226 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.227 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.rbd_secret_uuid        = 3c63ce0f-5206-59ae-8381-b67d0b6424b5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.227 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.227 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.227 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.228 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.228 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.228 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.228 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.228 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.229 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.229 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.229 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.229 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.229 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.229 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.230 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.230 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.230 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.230 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.231 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.231 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.231 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.231 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.231 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.232 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.232 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.232 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.232 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.232 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.232 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.232 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.233 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.233 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.233 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.233 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.233 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.234 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.234 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.234 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.234 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.234 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.235 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.235 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.235 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.235 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.235 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.235 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.236 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.236 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.236 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.236 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.236 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.236 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.237 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.237 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.237 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.237 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.237 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.237 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.238 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.238 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.238 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.238 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.238 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.238 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.239 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.239 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.239 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.239 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.239 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.239 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.240 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.240 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.240 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.240 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.240 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.240 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.241 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.241 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.241 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.241 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.241 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.241 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.241 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.242 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.242 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.242 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.242 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.242 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.242 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.243 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.243 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.243 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.243 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.243 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.243 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.243 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.244 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.244 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.244 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.244 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.244 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.244 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.245 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.245 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.245 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.245 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.245 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.246 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.246 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.246 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.246 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.246 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.246 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.247 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.247 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.247 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.247 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.247 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.248 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.248 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.248 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.248 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.248 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.249 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.249 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.249 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.249 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.249 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.249 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.250 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.250 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.250 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.250 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.250 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.251 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.251 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.251 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.251 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.251 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.252 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.252 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.252 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.252 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.252 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.252 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.252 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.253 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.253 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.253 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.253 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.253 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.254 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.254 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.254 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.254 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.254 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.255 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.255 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.255 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.255 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.256 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.256 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.256 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.256 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.256 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.257 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.257 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.257 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.257 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.258 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.258 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.258 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.258 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.258 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.259 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.259 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.259 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.259 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.260 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.260 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.260 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.260 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.261 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.261 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.261 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.261 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.261 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.261 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.262 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.262 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.262 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.262 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.262 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.262 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.263 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.263 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.263 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.263 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.263 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.263 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.264 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.264 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.264 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.264 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.264 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.264 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.265 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.265 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.265 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.265 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.265 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.266 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.266 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.266 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.266 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.266 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.266 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.266 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.267 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.267 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.267 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.267 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.267 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.267 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.267 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.268 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.268 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.268 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.268 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.268 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.269 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.269 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.269 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.269 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.269 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.269 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.269 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.270 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.270 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.270 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.270 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.270 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.270 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.271 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.271 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.271 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.271 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.271 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.271 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.272 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.272 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.272 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.272 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.272 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.272 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.272 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.273 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.273 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.273 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.273 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.273 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.273 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.273 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.274 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.274 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.274 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.274 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.274 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.274 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.275 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.275 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.275 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.275 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.275 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.276 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.276 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.276 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.276 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.276 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.277 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.277 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.277 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.277 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.277 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.277 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.277 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.278 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.278 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.278 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.278 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.278 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.278 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.278 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.279 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.279 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.279 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.279 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.279 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.279 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.279 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.280 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.280 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.280 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.280 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.280 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.280 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.281 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.281 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.281 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.281 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.281 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.281 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.281 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.282 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.282 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.282 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.282 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.282 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.282 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.282 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.283 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.283 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.283 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.283 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.283 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.283 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.284 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.284 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.284 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.284 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.284 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.284 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.284 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.285 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.285 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.285 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.285 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.285 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.285 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.285 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.286 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.286 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.286 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.286 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.286 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.287 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.287 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.287 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.287 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.287 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.287 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.288 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.288 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.288 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.288 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.288 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.288 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.288 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.289 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.289 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.289 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.289 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.289 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.289 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.290 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.290 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.290 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.290 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.290 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.290 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.290 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.291 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.291 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.291 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.291 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.291 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.291 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.291 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.292 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.292 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.292 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.292 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.292 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.292 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.292 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.293 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.293 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.293 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.293 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.293 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.293 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.293 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.294 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.294 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.294 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.294 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.294 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.295 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.295 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.295 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.295 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.295 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.295 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.295 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.296 256093 DEBUG oslo_service.service [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.297 256093 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.316 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.317 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.317 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.318 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 05 10:07:44 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 05 10:07:44 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 05 10:07:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.398 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fa7e95f03a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.400 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fa7e95f03a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.401 256093 INFO nova.virt.libvirt.driver [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Connection event '1' reason 'None'
Dec 05 10:07:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:44.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.424 256093 WARNING nova.virt.libvirt.driver [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 05 10:07:44 compute-0 nova_compute[256089]: 2025-12-05 10:07:44.425 256093 DEBUG nova.virt.libvirt.volume.mount [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 05 10:07:44 compute-0 sudo[256762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twvoocymcuqegsdzacbfgmxcmgmuyneg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929264.3795881-4333-20761387449843/AnsiballZ_podman_container.py'
Dec 05 10:07:44 compute-0 sudo[256762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:45 compute-0 python3.9[256764]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.261 256093 INFO nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Libvirt host capabilities <capabilities>
Dec 05 10:07:45 compute-0 nova_compute[256089]: 
Dec 05 10:07:45 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <host>
Dec 05 10:07:45 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <uuid>f275b88f-2c99-47a9-a747-5d8960473fbf</uuid>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <cpu>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <arch>x86_64</arch>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model>EPYC-Rome-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <vendor>AMD</vendor>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <microcode version='16777317'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <signature family='23' model='49' stepping='0'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='x2apic'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='tsc-deadline'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='osxsave'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='hypervisor'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='tsc_adjust'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='spec-ctrl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='stibp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='arch-capabilities'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='cmp_legacy'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='topoext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='virt-ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='lbrv'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='tsc-scale'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='vmcb-clean'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='pause-filter'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='pfthreshold'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='svme-addr-chk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='rdctl-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='skip-l1dfl-vmentry'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='mds-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature name='pschange-mc-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <pages unit='KiB' size='4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <pages unit='KiB' size='2048'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <pages unit='KiB' size='1048576'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </cpu>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <power_management>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <suspend_mem/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </power_management>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <iommu support='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <migration_features>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <live/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <uri_transports>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <uri_transport>tcp</uri_transport>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <uri_transport>rdma</uri_transport>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </uri_transports>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </migration_features>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <topology>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <cells num='1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <cell id='0'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:           <memory unit='KiB'>7864316</memory>
Dec 05 10:07:45 compute-0 nova_compute[256089]:           <pages unit='KiB' size='4'>1966079</pages>
Dec 05 10:07:45 compute-0 nova_compute[256089]:           <pages unit='KiB' size='2048'>0</pages>
Dec 05 10:07:45 compute-0 nova_compute[256089]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 05 10:07:45 compute-0 nova_compute[256089]:           <distances>
Dec 05 10:07:45 compute-0 nova_compute[256089]:             <sibling id='0' value='10'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:           </distances>
Dec 05 10:07:45 compute-0 nova_compute[256089]:           <cpus num='8'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:           </cpus>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         </cell>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </cells>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </topology>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <cache>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </cache>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <secmodel>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model>selinux</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <doi>0</doi>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </secmodel>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <secmodel>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model>dac</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <doi>0</doi>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </secmodel>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </host>
Dec 05 10:07:45 compute-0 nova_compute[256089]: 
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <guest>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <os_type>hvm</os_type>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <arch name='i686'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <wordsize>32</wordsize>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <domain type='qemu'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <domain type='kvm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </arch>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <features>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <pae/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <nonpae/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <acpi default='on' toggle='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <apic default='on' toggle='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <cpuselection/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <deviceboot/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <disksnapshot default='on' toggle='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <externalSnapshot/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </features>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </guest>
Dec 05 10:07:45 compute-0 nova_compute[256089]: 
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <guest>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <os_type>hvm</os_type>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <arch name='x86_64'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <wordsize>64</wordsize>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <domain type='qemu'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <domain type='kvm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </arch>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <features>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <acpi default='on' toggle='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <apic default='on' toggle='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <cpuselection/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <deviceboot/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <disksnapshot default='on' toggle='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <externalSnapshot/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </features>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </guest>
Dec 05 10:07:45 compute-0 nova_compute[256089]: 
Dec 05 10:07:45 compute-0 nova_compute[256089]: </capabilities>
Dec 05 10:07:45 compute-0 nova_compute[256089]: 
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.273 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 05 10:07:45 compute-0 ceph-mon[74418]: pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 10:07:45 compute-0 sudo[256762]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 10:07:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.300 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 05 10:07:45 compute-0 nova_compute[256089]: <domainCapabilities>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <domain>kvm</domain>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <arch>i686</arch>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <vcpu max='4096'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <iothreads supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <os supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <enum name='firmware'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <loader supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>rom</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pflash</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='readonly'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>yes</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>no</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='secure'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>no</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </loader>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </os>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <cpu>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='host-passthrough' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='hostPassthroughMigratable'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>on</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>off</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='maximum' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='maximumMigratable'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>on</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>off</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='host-model' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <vendor>AMD</vendor>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='x2apic'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='hypervisor'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='stibp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='overflow-recov'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='succor'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='ibrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='lbrv'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='tsc-scale'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='flushbyasid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='pause-filter'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='pfthreshold'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='disable' name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='custom' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cooperlake'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cooperlake-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cooperlake-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Dhyana-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Genoa'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amd-psfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='auto-ibrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='stibp-always-on'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amd-psfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='auto-ibrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='stibp-always-on'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Milan'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Milan-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Milan-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amd-psfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='stibp-always-on'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='GraniteRapids'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='prefetchiti'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='GraniteRapids-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='prefetchiti'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='GraniteRapids-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10-128'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10-256'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10-512'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='prefetchiti'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v6'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v7'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='KnightsMill'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512er'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512pf'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='KnightsMill-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512er'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512pf'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G4-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tbm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G5-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tbm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SierraForest'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cmpccxadd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SierraForest-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cmpccxadd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='athlon'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='athlon-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='core2duo'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='core2duo-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='coreduo'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='coreduo-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='n270'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='n270-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='phenom'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='phenom-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </cpu>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <memoryBacking supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <enum name='sourceType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>file</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>anonymous</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>memfd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </memoryBacking>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <devices>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <disk supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='diskDevice'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>disk</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>cdrom</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>floppy</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>lun</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='bus'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>fdc</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>scsi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>usb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>sata</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-non-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </disk>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <graphics supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vnc</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>egl-headless</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>dbus</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </graphics>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <video supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='modelType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vga</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>cirrus</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>none</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>bochs</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>ramfb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </video>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <hostdev supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='mode'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>subsystem</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='startupPolicy'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>default</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>mandatory</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>requisite</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>optional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='subsysType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>usb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pci</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>scsi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='capsType'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='pciBackend'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </hostdev>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <rng supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-non-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendModel'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>random</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>egd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>builtin</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </rng>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <filesystem supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='driverType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>path</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>handle</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtiofs</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </filesystem>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <tpm supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tpm-tis</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tpm-crb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendModel'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>emulator</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>external</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendVersion'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>2.0</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </tpm>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <redirdev supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='bus'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>usb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </redirdev>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <channel supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pty</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>unix</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </channel>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <crypto supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>qemu</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendModel'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>builtin</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </crypto>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <interface supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>default</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>passt</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </interface>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <panic supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>isa</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>hyperv</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </panic>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <console supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>null</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vc</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pty</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>dev</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>file</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pipe</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>stdio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>udp</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tcp</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>unix</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>qemu-vdagent</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>dbus</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </console>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </devices>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <features>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <gic supported='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <vmcoreinfo supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <genid supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <backingStoreInput supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <backup supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <async-teardown supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <ps2 supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <sev supported='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <sgx supported='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <hyperv supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='features'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>relaxed</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vapic</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>spinlocks</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vpindex</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>runtime</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>synic</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>stimer</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>reset</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vendor_id</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>frequencies</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>reenlightenment</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tlbflush</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>ipi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>avic</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>emsr_bitmap</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>xmm_input</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <defaults>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <spinlocks>4095</spinlocks>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <stimer_direct>on</stimer_direct>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </defaults>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </hyperv>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <launchSecurity supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='sectype'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tdx</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </launchSecurity>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </features>
Dec 05 10:07:45 compute-0 nova_compute[256089]: </domainCapabilities>
Dec 05 10:07:45 compute-0 nova_compute[256089]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.306 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 05 10:07:45 compute-0 nova_compute[256089]: <domainCapabilities>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <domain>kvm</domain>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <arch>i686</arch>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <vcpu max='240'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <iothreads supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <os supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <enum name='firmware'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <loader supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>rom</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pflash</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='readonly'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>yes</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>no</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='secure'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>no</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </loader>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </os>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <cpu>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='host-passthrough' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='hostPassthroughMigratable'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>on</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>off</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='maximum' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='maximumMigratable'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>on</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>off</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='host-model' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <vendor>AMD</vendor>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='x2apic'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='hypervisor'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='stibp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='overflow-recov'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='succor'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='ibrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='lbrv'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='tsc-scale'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='flushbyasid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='pause-filter'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='pfthreshold'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='disable' name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='custom' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cooperlake'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cooperlake-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cooperlake-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Dhyana-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Genoa'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amd-psfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='auto-ibrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='stibp-always-on'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amd-psfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='auto-ibrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='stibp-always-on'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Milan'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Milan-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Milan-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amd-psfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='stibp-always-on'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='GraniteRapids'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='prefetchiti'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='GraniteRapids-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='prefetchiti'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='GraniteRapids-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10-128'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10-256'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10-512'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='prefetchiti'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v6'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v7'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='KnightsMill'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512er'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512pf'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='KnightsMill-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512er'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512pf'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G4-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tbm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G5-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tbm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SierraForest'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cmpccxadd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SierraForest-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cmpccxadd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='athlon'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='athlon-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='core2duo'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='core2duo-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='coreduo'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='coreduo-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='n270'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='n270-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='phenom'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='phenom-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </cpu>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <memoryBacking supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <enum name='sourceType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>file</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>anonymous</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>memfd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </memoryBacking>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <devices>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <disk supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='diskDevice'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>disk</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>cdrom</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>floppy</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>lun</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='bus'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>ide</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>fdc</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>scsi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>usb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>sata</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-non-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </disk>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <graphics supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vnc</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>egl-headless</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>dbus</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </graphics>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <video supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='modelType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vga</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>cirrus</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>none</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>bochs</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>ramfb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </video>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <hostdev supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='mode'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>subsystem</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='startupPolicy'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>default</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>mandatory</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>requisite</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>optional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='subsysType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>usb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pci</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>scsi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='capsType'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='pciBackend'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </hostdev>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <rng supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-non-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendModel'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>random</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>egd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>builtin</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </rng>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <filesystem supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='driverType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>path</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>handle</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtiofs</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </filesystem>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <tpm supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tpm-tis</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tpm-crb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendModel'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>emulator</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>external</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendVersion'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>2.0</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </tpm>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <redirdev supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='bus'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>usb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </redirdev>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <channel supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pty</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>unix</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </channel>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <crypto supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>qemu</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendModel'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>builtin</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </crypto>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <interface supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>default</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>passt</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </interface>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <panic supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>isa</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>hyperv</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </panic>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <console supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>null</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vc</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pty</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>dev</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>file</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pipe</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>stdio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>udp</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tcp</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>unix</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>qemu-vdagent</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>dbus</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </console>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </devices>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <features>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <gic supported='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <vmcoreinfo supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <genid supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <backingStoreInput supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <backup supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <async-teardown supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <ps2 supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <sev supported='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <sgx supported='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <hyperv supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='features'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>relaxed</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vapic</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>spinlocks</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vpindex</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>runtime</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>synic</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>stimer</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>reset</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vendor_id</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>frequencies</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>reenlightenment</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tlbflush</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>ipi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>avic</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>emsr_bitmap</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>xmm_input</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <defaults>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <spinlocks>4095</spinlocks>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <stimer_direct>on</stimer_direct>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </defaults>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </hyperv>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <launchSecurity supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='sectype'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tdx</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </launchSecurity>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </features>
Dec 05 10:07:45 compute-0 nova_compute[256089]: </domainCapabilities>
Dec 05 10:07:45 compute-0 nova_compute[256089]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.334 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.338 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 05 10:07:45 compute-0 nova_compute[256089]: <domainCapabilities>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <domain>kvm</domain>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <arch>x86_64</arch>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <vcpu max='4096'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <iothreads supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <os supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <enum name='firmware'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>efi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <loader supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>rom</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pflash</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='readonly'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>yes</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>no</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='secure'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>yes</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>no</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </loader>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </os>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <cpu>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='host-passthrough' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='hostPassthroughMigratable'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>on</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>off</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='maximum' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='maximumMigratable'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>on</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>off</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='host-model' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <vendor>AMD</vendor>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='x2apic'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='hypervisor'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='stibp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='overflow-recov'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='succor'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='ibrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='lbrv'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='tsc-scale'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='flushbyasid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='pause-filter'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='pfthreshold'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='disable' name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='custom' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cooperlake'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cooperlake-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cooperlake-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Dhyana-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Genoa'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amd-psfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='auto-ibrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='stibp-always-on'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amd-psfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='auto-ibrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='stibp-always-on'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Milan'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Milan-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Milan-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amd-psfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='stibp-always-on'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='GraniteRapids'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='prefetchiti'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='GraniteRapids-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='prefetchiti'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='GraniteRapids-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10-128'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10-256'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10-512'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='prefetchiti'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v6'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v7'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='KnightsMill'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512er'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512pf'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='KnightsMill-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512er'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512pf'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G4-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tbm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G5-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tbm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SierraForest'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cmpccxadd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SierraForest-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cmpccxadd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='athlon'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='athlon-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='core2duo'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='core2duo-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='coreduo'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='coreduo-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='n270'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='n270-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='phenom'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='phenom-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </cpu>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <memoryBacking supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <enum name='sourceType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>file</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>anonymous</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>memfd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </memoryBacking>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <devices>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <disk supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='diskDevice'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>disk</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>cdrom</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>floppy</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>lun</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='bus'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>fdc</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>scsi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>usb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>sata</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-non-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </disk>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <graphics supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vnc</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>egl-headless</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>dbus</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </graphics>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <video supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='modelType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vga</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>cirrus</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>none</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>bochs</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>ramfb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </video>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <hostdev supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='mode'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>subsystem</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='startupPolicy'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>default</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>mandatory</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>requisite</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>optional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='subsysType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>usb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pci</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>scsi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='capsType'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='pciBackend'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </hostdev>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <rng supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-non-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendModel'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>random</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>egd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>builtin</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </rng>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <filesystem supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='driverType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>path</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>handle</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtiofs</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </filesystem>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <tpm supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tpm-tis</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tpm-crb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendModel'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>emulator</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>external</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendVersion'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>2.0</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </tpm>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <redirdev supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='bus'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>usb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </redirdev>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <channel supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pty</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>unix</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </channel>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <crypto supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>qemu</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendModel'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>builtin</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </crypto>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <interface supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>default</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>passt</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </interface>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <panic supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>isa</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>hyperv</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </panic>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <console supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>null</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vc</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pty</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>dev</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>file</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pipe</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>stdio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>udp</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tcp</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>unix</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>qemu-vdagent</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>dbus</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </console>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </devices>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <features>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <gic supported='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <vmcoreinfo supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <genid supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <backingStoreInput supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <backup supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <async-teardown supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <ps2 supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <sev supported='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <sgx supported='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <hyperv supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='features'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>relaxed</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vapic</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>spinlocks</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vpindex</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>runtime</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>synic</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>stimer</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>reset</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vendor_id</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>frequencies</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>reenlightenment</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tlbflush</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>ipi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>avic</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>emsr_bitmap</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>xmm_input</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <defaults>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <spinlocks>4095</spinlocks>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <stimer_direct>on</stimer_direct>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </defaults>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </hyperv>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <launchSecurity supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='sectype'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tdx</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </launchSecurity>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </features>
Dec 05 10:07:45 compute-0 nova_compute[256089]: </domainCapabilities>
Dec 05 10:07:45 compute-0 nova_compute[256089]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.399 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 05 10:07:45 compute-0 nova_compute[256089]: <domainCapabilities>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <domain>kvm</domain>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <arch>x86_64</arch>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <vcpu max='240'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <iothreads supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <os supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <enum name='firmware'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <loader supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>rom</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pflash</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='readonly'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>yes</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>no</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='secure'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>no</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </loader>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </os>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <cpu>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='host-passthrough' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='hostPassthroughMigratable'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>on</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>off</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='maximum' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='maximumMigratable'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>on</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>off</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='host-model' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <vendor>AMD</vendor>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='x2apic'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='hypervisor'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='stibp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='overflow-recov'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='succor'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='ibrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='lbrv'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='tsc-scale'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='flushbyasid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='pause-filter'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='pfthreshold'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <feature policy='disable' name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <mode name='custom' supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Broadwell-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cooperlake'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cooperlake-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Cooperlake-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Denverton-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Dhyana-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Genoa'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amd-psfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='auto-ibrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='stibp-always-on'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amd-psfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='auto-ibrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='stibp-always-on'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Milan'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Milan-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Milan-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amd-psfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='stibp-always-on'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-Rome-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='EPYC-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='GraniteRapids'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='prefetchiti'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='GraniteRapids-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='prefetchiti'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='GraniteRapids-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10-128'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10-256'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx10-512'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='prefetchiti'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Haswell-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v6'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Icelake-Server-v7'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='IvyBridge-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='KnightsMill'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512er'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512pf'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='KnightsMill-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512er'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512pf'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G4-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tbm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Opteron_G5-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fma4'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tbm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xop'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SapphireRapids-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='amx-tile'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-bf16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-fp16'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bitalg'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrc'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fzrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='la57'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='taa-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xfd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SierraForest'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cmpccxadd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='SierraForest-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ifma'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cmpccxadd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fbsdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='fsrs'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ibrs-all'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mcdt-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pbrsb-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='psdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='serialize'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vaes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Client-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='hle'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='rtm'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Skylake-Server-v5'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512bw'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512cd'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512dq'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512f'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='avx512vl'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='invpcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pcid'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='pku'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='mpx'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v2'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v3'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='core-capability'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='split-lock-detect'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='Snowridge-v4'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='cldemote'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='erms'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='gfni'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdir64b'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='movdiri'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='xsaves'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='athlon'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='athlon-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='core2duo'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='core2duo-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='coreduo'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='coreduo-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='n270'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='n270-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='ss'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='phenom'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <blockers model='phenom-v1'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnow'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <feature name='3dnowext'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </blockers>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </mode>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </cpu>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <memoryBacking supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <enum name='sourceType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>file</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>anonymous</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <value>memfd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </memoryBacking>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <devices>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <disk supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='diskDevice'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>disk</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>cdrom</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>floppy</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>lun</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='bus'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>ide</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>fdc</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>scsi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>usb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>sata</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-non-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </disk>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <graphics supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vnc</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>egl-headless</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>dbus</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </graphics>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <video supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='modelType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vga</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>cirrus</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>none</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>bochs</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>ramfb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </video>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <hostdev supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='mode'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>subsystem</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='startupPolicy'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>default</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>mandatory</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>requisite</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>optional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='subsysType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>usb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pci</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>scsi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='capsType'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='pciBackend'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </hostdev>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <rng supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtio-non-transitional</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendModel'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>random</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>egd</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>builtin</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </rng>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <filesystem supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='driverType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>path</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>handle</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>virtiofs</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </filesystem>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <tpm supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tpm-tis</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tpm-crb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendModel'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>emulator</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>external</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendVersion'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>2.0</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </tpm>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <redirdev supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='bus'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>usb</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </redirdev>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <channel supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pty</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>unix</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </channel>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <crypto supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>qemu</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendModel'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>builtin</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </crypto>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <interface supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='backendType'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>default</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>passt</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </interface>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <panic supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='model'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>isa</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>hyperv</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </panic>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <console supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='type'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>null</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vc</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pty</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>dev</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>file</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>pipe</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>stdio</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>udp</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tcp</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>unix</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>qemu-vdagent</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>dbus</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </console>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </devices>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   <features>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <gic supported='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <vmcoreinfo supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <genid supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <backingStoreInput supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <backup supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <async-teardown supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <ps2 supported='yes'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <sev supported='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <sgx supported='no'/>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <hyperv supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='features'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>relaxed</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vapic</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>spinlocks</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vpindex</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>runtime</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>synic</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>stimer</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>reset</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>vendor_id</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>frequencies</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>reenlightenment</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tlbflush</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>ipi</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>avic</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>emsr_bitmap</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>xmm_input</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <defaults>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <spinlocks>4095</spinlocks>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <stimer_direct>on</stimer_direct>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </defaults>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </hyperv>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     <launchSecurity supported='yes'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       <enum name='sectype'>
Dec 05 10:07:45 compute-0 nova_compute[256089]:         <value>tdx</value>
Dec 05 10:07:45 compute-0 nova_compute[256089]:       </enum>
Dec 05 10:07:45 compute-0 nova_compute[256089]:     </launchSecurity>
Dec 05 10:07:45 compute-0 nova_compute[256089]:   </features>
Dec 05 10:07:45 compute-0 nova_compute[256089]: </domainCapabilities>
Dec 05 10:07:45 compute-0 nova_compute[256089]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.463 256093 DEBUG nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.464 256093 INFO nova.virt.libvirt.host [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Secure Boot support detected
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.466 256093 INFO nova.virt.libvirt.driver [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.475 256093 DEBUG nova.virt.libvirt.driver [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.503 256093 INFO nova.virt.node [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Determined node identity bad8518e-442e-4fc2-b7f3-2c453f1840d6 from /var/lib/nova/compute_id
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.526 256093 WARNING nova.compute.manager [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Compute nodes ['bad8518e-442e-4fc2-b7f3-2c453f1840d6'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.563 256093 INFO nova.compute.manager [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.619 256093 WARNING nova.compute.manager [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.620 256093 DEBUG oslo_concurrency.lockutils [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.620 256093 DEBUG oslo_concurrency.lockutils [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.620 256093 DEBUG oslo_concurrency.lockutils [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.620 256093 DEBUG nova.compute.resource_tracker [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:07:45 compute-0 nova_compute[256089]: 2025-12-05 10:07:45.621 256093 DEBUG oslo_concurrency.processutils [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:07:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:07:45] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:07:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:07:45] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:07:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:45.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:45 compute-0 sudo[256968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szukoebdnzjcdndujqvkomvhkegapkqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929265.6227932-4357-170112297585501/AnsiballZ_systemd.py'
Dec 05 10:07:45 compute-0 sudo[256968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:07:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/893981376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:07:46 compute-0 nova_compute[256089]: 2025-12-05 10:07:46.079 256093 DEBUG oslo_concurrency.processutils [None req-24e6c136-2e18-4d9c-a6bb-21fb16873d35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:07:46 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 05 10:07:46 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 05 10:07:46 compute-0 python3.9[256970]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 10:07:46 compute-0 systemd[1]: Stopping nova_compute container...
Dec 05 10:07:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1358503223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:07:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/893981376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:07:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1728650855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:07:46 compute-0 nova_compute[256089]: 2025-12-05 10:07:46.313 256093 DEBUG oslo_concurrency.lockutils [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:07:46 compute-0 nova_compute[256089]: 2025-12-05 10:07:46.313 256093 DEBUG oslo_concurrency.lockutils [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:07:46 compute-0 nova_compute[256089]: 2025-12-05 10:07:46.313 256093 DEBUG oslo_concurrency.lockutils [None req-665cd2ea-e8cf-4f47-935b-67ba00339c26 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:07:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:46.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:46 compute-0 virtqemud[256610]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 05 10:07:46 compute-0 virtqemud[256610]: hostname: compute-0
Dec 05 10:07:46 compute-0 virtqemud[256610]: End of file while reading data: Input/output error
Dec 05 10:07:46 compute-0 systemd[1]: libpod-3505394ad9c216255099f5108dfcd9c7a21dad336485e63533e7fa33170917a7.scope: Deactivated successfully.
Dec 05 10:07:46 compute-0 systemd[1]: libpod-3505394ad9c216255099f5108dfcd9c7a21dad336485e63533e7fa33170917a7.scope: Consumed 3.990s CPU time.
Dec 05 10:07:46 compute-0 podman[257000]: 2025-12-05 10:07:46.781475531 +0000 UTC m=+0.517346635 container died 3505394ad9c216255099f5108dfcd9c7a21dad336485e63533e7fa33170917a7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Dec 05 10:07:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3505394ad9c216255099f5108dfcd9c7a21dad336485e63533e7fa33170917a7-userdata-shm.mount: Deactivated successfully.
Dec 05 10:07:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f18b14667ea017c18a04eb1540cd9ff430740e4f424d492d022a8b518f95f6c4-merged.mount: Deactivated successfully.
Dec 05 10:07:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:47.290Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:07:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:07:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:47.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:48.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:07:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:49 compute-0 ceph-mon[74418]: pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Dec 05 10:07:49 compute-0 podman[257000]: 2025-12-05 10:07:49.47742255 +0000 UTC m=+3.213293654 container cleanup 3505394ad9c216255099f5108dfcd9c7a21dad336485e63533e7fa33170917a7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:07:49 compute-0 podman[257000]: nova_compute
Dec 05 10:07:49 compute-0 podman[257034]: nova_compute
Dec 05 10:07:49 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 05 10:07:49 compute-0 systemd[1]: Stopped nova_compute container.
Dec 05 10:07:49 compute-0 systemd[1]: Starting nova_compute container...
Dec 05 10:07:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:49.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:07:49 compute-0 sudo[257060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18b14667ea017c18a04eb1540cd9ff430740e4f424d492d022a8b518f95f6c4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18b14667ea017c18a04eb1540cd9ff430740e4f424d492d022a8b518f95f6c4/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18b14667ea017c18a04eb1540cd9ff430740e4f424d492d022a8b518f95f6c4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18b14667ea017c18a04eb1540cd9ff430740e4f424d492d022a8b518f95f6c4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18b14667ea017c18a04eb1540cd9ff430740e4f424d492d022a8b518f95f6c4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:50 compute-0 sudo[257060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:07:50 compute-0 sudo[257060]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:50 compute-0 podman[257047]: 2025-12-05 10:07:50.166748446 +0000 UTC m=+0.573292141 container init 3505394ad9c216255099f5108dfcd9c7a21dad336485e63533e7fa33170917a7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Dec 05 10:07:50 compute-0 podman[257047]: 2025-12-05 10:07:50.174705691 +0000 UTC m=+0.581249376 container start 3505394ad9c216255099f5108dfcd9c7a21dad336485e63533e7fa33170917a7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 05 10:07:50 compute-0 podman[257047]: nova_compute
Dec 05 10:07:50 compute-0 nova_compute[257087]: + sudo -E kolla_set_configs
Dec 05 10:07:50 compute-0 systemd[1]: Started nova_compute container.
Dec 05 10:07:50 compute-0 sudo[256968]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Validating config file
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Copying service configuration files
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Deleting /etc/ceph
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Creating directory /etc/ceph
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /etc/ceph
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Writing out command to execute
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 10:07:50 compute-0 nova_compute[257087]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 05 10:07:50 compute-0 nova_compute[257087]: ++ cat /run_command
Dec 05 10:07:50 compute-0 nova_compute[257087]: + CMD=nova-compute
Dec 05 10:07:50 compute-0 nova_compute[257087]: + ARGS=
Dec 05 10:07:50 compute-0 nova_compute[257087]: + sudo kolla_copy_cacerts
Dec 05 10:07:50 compute-0 nova_compute[257087]: Running command: 'nova-compute'
Dec 05 10:07:50 compute-0 nova_compute[257087]: + [[ ! -n '' ]]
Dec 05 10:07:50 compute-0 nova_compute[257087]: + . kolla_extend_start
Dec 05 10:07:50 compute-0 nova_compute[257087]: + echo 'Running command: '\''nova-compute'\'''
Dec 05 10:07:50 compute-0 nova_compute[257087]: + umask 0022
Dec 05 10:07:50 compute-0 nova_compute[257087]: + exec nova-compute
Dec 05 10:07:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:50.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:50 compute-0 ceph-mon[74418]: pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:50 compute-0 ceph-mon[74418]: pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:07:50 compute-0 sudo[257252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euluixthgonjbfruqznlqjqwocvvrlyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764929270.5075376-4384-20176546907991/AnsiballZ_podman_container.py'
Dec 05 10:07:50 compute-0 sudo[257252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:07:51 compute-0 python3.9[257254]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 05 10:07:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:51 compute-0 systemd[1]: Started libpod-conmon-233399513c901968d74261d76525adab10fdd128fd7c05d28b54d1d0ebfa3c62.scope.
Dec 05 10:07:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630c088e0ba058054aa204c1e93463532867eac41335b9cb08e5ad30fd67c4fe/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630c088e0ba058054aa204c1e93463532867eac41335b9cb08e5ad30fd67c4fe/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630c088e0ba058054aa204c1e93463532867eac41335b9cb08e5ad30fd67c4fe/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:51 compute-0 podman[257279]: 2025-12-05 10:07:51.480742245 +0000 UTC m=+0.221914327 container init 233399513c901968d74261d76525adab10fdd128fd7c05d28b54d1d0ebfa3c62 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:07:51 compute-0 podman[257279]: 2025-12-05 10:07:51.489115512 +0000 UTC m=+0.230287594 container start 233399513c901968d74261d76525adab10fdd128fd7c05d28b54d1d0ebfa3c62 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Dec 05 10:07:51 compute-0 python3.9[257254]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Applying nova statedir ownership
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 05 10:07:51 compute-0 nova_compute_init[257300]: INFO:nova_statedir:Nova statedir ownership complete
Dec 05 10:07:51 compute-0 systemd[1]: libpod-233399513c901968d74261d76525adab10fdd128fd7c05d28b54d1d0ebfa3c62.scope: Deactivated successfully.
Dec 05 10:07:51 compute-0 podman[257301]: 2025-12-05 10:07:51.573586302 +0000 UTC m=+0.048030423 container died 233399513c901968d74261d76525adab10fdd128fd7c05d28b54d1d0ebfa3c62 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-233399513c901968d74261d76525adab10fdd128fd7c05d28b54d1d0ebfa3c62-userdata-shm.mount: Deactivated successfully.
Dec 05 10:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-630c088e0ba058054aa204c1e93463532867eac41335b9cb08e5ad30fd67c4fe-merged.mount: Deactivated successfully.
Dec 05 10:07:51 compute-0 podman[257309]: 2025-12-05 10:07:51.630457093 +0000 UTC m=+0.074363287 container cleanup 233399513c901968d74261d76525adab10fdd128fd7c05d28b54d1d0ebfa3c62 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:51 compute-0 systemd[1]: libpod-conmon-233399513c901968d74261d76525adab10fdd128fd7c05d28b54d1d0ebfa3c62.scope: Deactivated successfully.
Dec 05 10:07:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:51.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:51 compute-0 sudo[257252]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:51 compute-0 ceph-mon[74418]: pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:52 compute-0 sshd-session[226185]: Connection closed by 192.168.122.30 port 51968
Dec 05 10:07:52 compute-0 sshd-session[226179]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:07:52 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Dec 05 10:07:52 compute-0 systemd[1]: session-54.scope: Consumed 2min 24.739s CPU time.
Dec 05 10:07:52 compute-0 systemd-logind[789]: Session 54 logged out. Waiting for processes to exit.
Dec 05 10:07:52 compute-0 systemd-logind[789]: Removed session 54.
Dec 05 10:07:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:07:52 compute-0 nova_compute[257087]: 2025-12-05 10:07:52.352 257094 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 10:07:52 compute-0 nova_compute[257087]: 2025-12-05 10:07:52.352 257094 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 10:07:52 compute-0 nova_compute[257087]: 2025-12-05 10:07:52.352 257094 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 10:07:52 compute-0 nova_compute[257087]: 2025-12-05 10:07:52.352 257094 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 05 10:07:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:07:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:52.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:07:52 compute-0 nova_compute[257087]: 2025-12-05 10:07:52.490 257094 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:07:52 compute-0 nova_compute[257087]: 2025-12-05 10:07:52.519 257094 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:07:52 compute-0 nova_compute[257087]: 2025-12-05 10:07:52.519 257094 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 05 10:07:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3043008423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.065 257094 INFO nova.virt.driver [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.214 257094 INFO nova.compute.provider_config [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.222 257094 DEBUG oslo_concurrency.lockutils [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.223 257094 DEBUG oslo_concurrency.lockutils [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.223 257094 DEBUG oslo_concurrency.lockutils [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.223 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.223 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.224 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.224 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.224 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.224 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.224 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.224 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.225 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.225 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.225 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.225 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.225 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.225 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.226 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.226 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.226 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.226 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.226 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.226 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.226 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.227 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.227 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.227 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.227 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.227 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.227 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.228 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.228 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.228 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.228 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.228 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.228 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.228 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.229 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.229 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.229 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.229 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.229 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.229 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.230 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.230 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.230 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.230 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.230 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.231 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.231 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.231 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.231 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.231 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.231 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.231 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.232 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.232 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.232 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.232 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.232 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.233 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.233 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.233 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.233 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.233 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.233 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.233 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.234 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.234 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.234 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.234 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.234 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.234 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.235 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.235 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.235 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.235 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.235 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.235 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.235 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.236 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.236 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.236 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.236 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.236 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.237 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.237 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.237 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.237 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.237 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.237 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.237 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.238 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.238 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.238 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.238 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.238 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.238 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.238 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.239 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.239 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.239 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.239 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.239 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.239 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.240 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.240 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.240 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.240 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.240 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.240 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.241 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.241 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.241 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.241 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.241 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.241 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.242 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.242 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.242 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.242 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.242 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.242 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.242 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.243 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.243 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.243 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.243 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.243 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.243 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.243 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.244 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.244 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.244 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.244 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.244 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.245 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.246 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.246 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.246 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.246 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.246 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.246 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.247 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.247 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.247 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.247 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.247 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.247 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.247 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.248 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.248 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.248 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.248 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.248 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.249 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.249 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.249 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.249 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.249 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.250 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.250 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.250 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.250 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.250 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.251 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.251 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.251 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.251 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.251 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.251 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.251 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.252 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.252 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.252 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.252 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.252 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.252 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.252 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.253 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.253 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.253 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.253 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.253 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.253 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.253 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.254 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.254 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.254 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.254 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.254 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.254 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.255 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.255 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.255 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.255 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.255 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.255 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.256 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.256 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.256 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.256 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.256 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.256 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.257 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.257 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.257 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.257 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.257 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.257 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.257 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.258 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.258 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.258 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.258 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.258 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.258 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.259 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.259 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.259 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.259 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.259 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.259 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.259 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.260 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.260 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.260 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.260 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.260 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.260 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.261 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.261 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.261 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.261 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.261 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.261 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.262 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.262 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.262 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.262 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.262 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.263 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.263 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.263 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.263 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.263 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.263 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.263 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.264 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.264 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.264 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.264 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.264 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.264 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.265 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.265 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.265 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.265 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.265 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.265 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.265 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.266 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.266 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.266 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.266 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.266 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.266 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.267 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.267 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.267 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.267 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.267 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.267 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.268 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.268 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.268 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.268 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.268 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.268 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.269 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.269 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.269 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.269 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.269 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.270 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.270 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.270 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.270 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.271 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.271 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.271 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.271 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.271 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.271 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.272 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.272 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.272 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.272 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.272 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.273 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.273 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.273 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.273 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.273 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.274 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.274 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.274 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.274 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.274 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.274 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.275 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.275 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.275 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.275 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.275 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.275 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.275 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.276 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.276 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.276 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.276 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.276 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.276 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.276 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.277 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.277 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.277 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.277 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.277 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.277 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.277 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.278 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.278 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.278 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.278 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.278 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.278 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.278 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.279 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.279 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.279 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.279 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.279 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.279 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.280 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.280 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.280 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.280 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.280 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.280 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.280 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.281 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.281 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.281 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.281 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.281 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.281 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.281 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.282 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.282 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.282 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.282 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.282 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.283 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.283 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.283 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.283 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.283 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.283 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.283 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.284 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.284 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.284 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.284 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.285 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.285 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.285 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.285 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.285 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.285 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.286 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.286 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.286 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.286 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.286 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.286 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.286 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.287 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.287 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.287 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.287 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.287 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.287 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.288 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.288 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.288 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.288 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.288 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.288 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.289 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.289 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.289 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.289 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.289 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.289 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.289 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.290 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.290 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.290 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.290 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.290 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.290 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.290 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.291 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.291 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.291 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.291 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.291 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.291 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.292 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.292 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.292 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.292 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.292 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.292 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.293 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.293 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.293 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.293 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.293 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.293 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.294 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.294 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.294 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.294 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.294 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.294 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.295 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.295 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.295 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.295 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.295 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.295 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.295 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.296 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.296 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.296 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.296 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.296 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.297 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.297 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.297 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.297 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.297 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.297 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.297 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.298 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.298 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.298 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.298 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.298 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.299 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.299 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.299 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.299 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.299 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.299 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.300 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.300 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.300 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.300 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.300 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.300 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.301 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.301 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.301 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.301 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.301 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.301 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.302 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.302 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.302 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.302 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.302 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.303 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.303 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.303 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.303 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.303 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.304 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.304 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.304 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.304 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.304 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.305 257094 WARNING oslo_config.cfg [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 05 10:07:53 compute-0 nova_compute[257087]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 05 10:07:53 compute-0 nova_compute[257087]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 05 10:07:53 compute-0 nova_compute[257087]: and ``live_migration_inbound_addr`` respectively.
Dec 05 10:07:53 compute-0 nova_compute[257087]: ).  Its value may be silently ignored in the future.
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.305 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
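(A minimal sketch of the non-deprecated replacement settings named in the oslo_config warning above, assuming they would live in the [libvirt] section of nova.conf on this host; the "tls" scheme is inferred from the qemu+tls://%s/system value logged here, and the inbound address is a placeholder, not taken from this log.)
    [libvirt]
    # hypothetical replacements for the deprecated live_migration_uri option
    live_migration_scheme = tls
    live_migration_inbound_addr = 192.0.2.10   # placeholder migration address (assumption)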
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.305 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.305 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.306 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.306 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.306 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.306 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.306 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.307 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.307 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.307 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.307 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.307 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.307 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.308 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.308 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.308 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.308 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.308 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.rbd_secret_uuid        = 3c63ce0f-5206-59ae-8381-b67d0b6424b5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.309 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.309 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.309 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.309 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.310 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.310 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.310 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.310 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.310 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.311 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.311 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.311 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.311 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.311 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.311 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.311 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.312 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.312 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.312 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.312 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.312 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.312 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.313 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.313 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.313 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.313 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.313 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.313 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.313 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.314 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.314 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.314 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.314 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.314 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.315 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.315 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.315 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.315 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.315 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.315 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.315 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.315 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.316 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.316 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.316 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.316 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.316 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.316 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.316 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.317 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.317 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.317 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.317 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.317 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.317 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.318 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.318 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.318 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.318 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.318 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.318 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.318 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.319 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.319 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.319 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.319 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.319 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.319 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.320 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.320 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.320 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.320 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.320 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.320 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.320 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.320 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.321 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.321 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.321 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.321 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.321 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.321 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.321 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.322 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.322 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.322 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.322 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.322 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.323 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.323 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.323 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.323 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.323 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.323 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.323 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.324 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.324 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.324 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.324 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.324 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.324 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.325 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.325 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.325 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.325 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.325 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.325 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.326 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.326 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.326 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.326 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.326 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.326 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.327 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.327 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.327 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.392 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.392 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.392 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.392 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.393 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.393 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.393 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.393 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.393 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.394 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.394 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.394 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.394 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.394 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.395 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.395 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.395 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.395 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.395 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.395 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.396 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.396 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.396 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.396 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.396 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.396 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.397 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.397 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.397 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.397 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.397 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.397 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.397 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.398 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.398 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.398 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.398 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.398 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.398 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.398 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.399 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.399 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.399 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.399 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.399 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.399 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.400 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.400 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.400 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.400 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.400 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.401 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.401 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.401 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.401 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.401 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.401 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.402 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.402 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.402 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.402 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.403 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.403 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.403 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.403 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.404 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.404 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.404 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.404 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.404 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.405 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.405 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.405 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.405 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.405 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.405 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.405 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.406 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.406 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.406 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.406 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.406 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.406 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.406 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.407 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.407 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.407 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.407 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.407 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.407 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.407 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.408 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.408 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.408 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.408 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.408 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.408 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.408 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.409 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.409 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.409 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.409 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.409 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.409 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.410 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.410 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.410 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.410 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.410 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.411 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.411 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.411 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.411 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.412 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.412 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.412 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.412 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.413 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.413 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.413 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.413 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.413 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.414 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.414 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.414 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.414 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.415 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.415 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.415 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.415 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.415 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.416 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.416 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.416 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.416 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.416 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.417 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.417 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.417 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.417 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.417 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.418 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.418 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.418 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.418 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.419 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.419 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.419 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.419 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.420 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.420 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.420 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.420 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.420 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.421 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.421 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.421 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.421 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.421 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.421 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.422 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.422 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.422 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.422 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.422 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.422 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.422 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.423 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.423 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.423 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.423 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.423 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.423 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.423 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.424 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.424 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.424 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.424 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.424 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.424 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.424 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.425 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.425 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.425 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.425 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.425 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.425 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.425 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.426 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.426 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.426 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.426 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.426 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.426 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.426 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.427 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.427 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.427 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.427 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.427 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.427 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.427 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.428 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.428 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.428 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.428 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.428 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.428 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.428 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.429 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.429 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.429 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.429 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.429 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.429 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.429 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.430 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.430 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.430 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.430 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.430 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.430 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.430 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.430 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.431 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.431 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.431 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.431 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.431 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.431 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.431 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.432 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.432 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.432 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.432 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.432 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.432 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.432 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.432 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.433 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.433 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.433 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.433 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.433 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.433 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.433 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.434 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.434 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.434 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.434 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.434 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.434 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.434 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.435 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.435 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.435 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.435 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.435 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.435 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.435 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.436 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.436 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.436 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.436 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.436 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.436 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.436 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.437 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.437 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.437 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.437 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.437 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.437 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.437 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.437 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.438 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.438 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.438 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.438 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.438 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.438 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.438 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.439 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.439 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.439 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.439 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.439 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.439 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.440 257094 DEBUG oslo_service.service [None req-41e95d2b-1bb4-4088-a9ae-ec3d4e882f4d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.441 257094 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.454 257094 INFO nova.virt.node [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Determined node identity bad8518e-442e-4fc2-b7f3-2c453f1840d6 from /var/lib/nova/compute_id
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.455 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.456 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.456 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.457 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.476 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb7bc8abd00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.479 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb7bc8abd00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.480 257094 INFO nova.virt.libvirt.driver [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Connection event '1' reason 'None'
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.488 257094 INFO nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Libvirt host capabilities <capabilities>
Dec 05 10:07:53 compute-0 nova_compute[257087]: 
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <host>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <uuid>f275b88f-2c99-47a9-a747-5d8960473fbf</uuid>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <cpu>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <arch>x86_64</arch>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model>EPYC-Rome-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <vendor>AMD</vendor>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <microcode version='16777317'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <signature family='23' model='49' stepping='0'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='x2apic'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='tsc-deadline'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='osxsave'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='hypervisor'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='tsc_adjust'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='spec-ctrl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='stibp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='arch-capabilities'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='cmp_legacy'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='topoext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='virt-ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='lbrv'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='tsc-scale'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='vmcb-clean'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='pause-filter'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='pfthreshold'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='svme-addr-chk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='rdctl-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='skip-l1dfl-vmentry'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='mds-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature name='pschange-mc-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <pages unit='KiB' size='4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <pages unit='KiB' size='2048'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <pages unit='KiB' size='1048576'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </cpu>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <power_management>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <suspend_mem/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </power_management>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <iommu support='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <migration_features>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <live/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <uri_transports>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <uri_transport>tcp</uri_transport>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <uri_transport>rdma</uri_transport>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </uri_transports>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </migration_features>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <topology>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <cells num='1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <cell id='0'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:           <memory unit='KiB'>7864316</memory>
Dec 05 10:07:53 compute-0 nova_compute[257087]:           <pages unit='KiB' size='4'>1966079</pages>
Dec 05 10:07:53 compute-0 nova_compute[257087]:           <pages unit='KiB' size='2048'>0</pages>
Dec 05 10:07:53 compute-0 nova_compute[257087]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 05 10:07:53 compute-0 nova_compute[257087]:           <distances>
Dec 05 10:07:53 compute-0 nova_compute[257087]:             <sibling id='0' value='10'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:           </distances>
Dec 05 10:07:53 compute-0 nova_compute[257087]:           <cpus num='8'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:           </cpus>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         </cell>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </cells>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </topology>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <cache>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </cache>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <secmodel>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model>selinux</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <doi>0</doi>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </secmodel>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <secmodel>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model>dac</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <doi>0</doi>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </secmodel>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </host>
Dec 05 10:07:53 compute-0 nova_compute[257087]: 
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <guest>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <os_type>hvm</os_type>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <arch name='i686'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <wordsize>32</wordsize>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <domain type='qemu'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <domain type='kvm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </arch>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <features>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <pae/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <nonpae/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <acpi default='on' toggle='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <apic default='on' toggle='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <cpuselection/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <deviceboot/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <disksnapshot default='on' toggle='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <externalSnapshot/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </features>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </guest>
Dec 05 10:07:53 compute-0 nova_compute[257087]: 
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <guest>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <os_type>hvm</os_type>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <arch name='x86_64'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <wordsize>64</wordsize>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <domain type='qemu'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <domain type='kvm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </arch>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <features>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <acpi default='on' toggle='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <apic default='on' toggle='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <cpuselection/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <deviceboot/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <disksnapshot default='on' toggle='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <externalSnapshot/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </features>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </guest>
Dec 05 10:07:53 compute-0 nova_compute[257087]: 
Dec 05 10:07:53 compute-0 nova_compute[257087]: </capabilities>
Dec 05 10:07:53 compute-0 nova_compute[257087]: 
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.496 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.498 257094 DEBUG nova.virt.libvirt.volume.mount [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.502 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 05 10:07:53 compute-0 nova_compute[257087]: <domainCapabilities>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <domain>kvm</domain>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <arch>i686</arch>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <vcpu max='4096'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <iothreads supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <os supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <enum name='firmware'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <loader supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>rom</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pflash</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='readonly'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>yes</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>no</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='secure'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>no</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </loader>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </os>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <cpu>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='host-passthrough' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='hostPassthroughMigratable'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>on</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>off</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='maximum' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='maximumMigratable'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>on</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>off</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='host-model' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <vendor>AMD</vendor>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='x2apic'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='hypervisor'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='stibp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='overflow-recov'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='succor'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='ibrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='lbrv'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='tsc-scale'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='flushbyasid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='pause-filter'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='pfthreshold'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='disable' name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='custom' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cooperlake'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cooperlake-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cooperlake-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Dhyana-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Genoa'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amd-psfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='auto-ibrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='stibp-always-on'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amd-psfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='auto-ibrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='stibp-always-on'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Milan'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Milan-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Milan-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amd-psfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='stibp-always-on'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='GraniteRapids'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='prefetchiti'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='GraniteRapids-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='prefetchiti'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='GraniteRapids-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10-128'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10-256'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10-512'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='prefetchiti'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v6'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v7'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='KnightsMill'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512er'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512pf'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='KnightsMill-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512er'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512pf'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G4-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tbm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G5-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tbm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SierraForest'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cmpccxadd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SierraForest-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cmpccxadd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='athlon'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='athlon-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='core2duo'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='core2duo-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='coreduo'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='coreduo-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='n270'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='n270-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='phenom'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='phenom-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </cpu>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <memoryBacking supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <enum name='sourceType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>file</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>anonymous</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>memfd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </memoryBacking>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <devices>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <disk supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='diskDevice'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>disk</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>cdrom</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>floppy</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>lun</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='bus'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>fdc</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>scsi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>usb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>sata</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-non-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </disk>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <graphics supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vnc</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>egl-headless</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>dbus</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </graphics>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <video supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='modelType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vga</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>cirrus</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>none</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>bochs</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>ramfb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </video>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <hostdev supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='mode'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>subsystem</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='startupPolicy'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>default</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>mandatory</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>requisite</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>optional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='subsysType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>usb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pci</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>scsi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='capsType'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='pciBackend'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </hostdev>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <rng supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-non-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendModel'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>random</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>egd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>builtin</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </rng>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <filesystem supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='driverType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>path</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>handle</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtiofs</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </filesystem>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <tpm supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tpm-tis</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tpm-crb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendModel'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>emulator</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>external</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendVersion'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>2.0</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </tpm>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <redirdev supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='bus'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>usb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </redirdev>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <channel supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pty</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>unix</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </channel>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <crypto supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>qemu</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendModel'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>builtin</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </crypto>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <interface supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>default</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>passt</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </interface>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <panic supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>isa</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>hyperv</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </panic>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <console supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>null</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vc</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pty</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>dev</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>file</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pipe</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>stdio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>udp</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tcp</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>unix</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>qemu-vdagent</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>dbus</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </console>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </devices>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <features>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <gic supported='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <vmcoreinfo supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <genid supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <backingStoreInput supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <backup supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <async-teardown supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <ps2 supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <sev supported='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <sgx supported='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <hyperv supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='features'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>relaxed</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vapic</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>spinlocks</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vpindex</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>runtime</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>synic</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>stimer</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>reset</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vendor_id</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>frequencies</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>reenlightenment</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tlbflush</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>ipi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>avic</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>emsr_bitmap</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>xmm_input</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <defaults>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <spinlocks>4095</spinlocks>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <stimer_direct>on</stimer_direct>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </defaults>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </hyperv>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <launchSecurity supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='sectype'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tdx</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </launchSecurity>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </features>
Dec 05 10:07:53 compute-0 nova_compute[257087]: </domainCapabilities>
Dec 05 10:07:53 compute-0 nova_compute[257087]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
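The XML dumped above is what nova's _get_domain_capabilities helper (nova/virt/libvirt/host.py) received from libvirt for /usr/libexec/qemu-kvm before logging it. A minimal sketch of how such a query can be issued with the libvirt-python bindings follows; it is illustrative only, and the connection URI, arch, and machine type are assumptions rather than values taken from this log.

    # Sketch only: ask libvirt for domain capabilities, the same kind of query
    # whose result appears in the nova debug output above. Assumes the
    # libvirt-python bindings are installed and libvirtd is reachable locally.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')   # read-only connection; URI is an assumption
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',                    # emulator path seen in the log
        'x86_64',                                   # assumed arch (the next dump below uses i686)
        'pc',                                       # assumed machine type
        'kvm',                                      # virt type
        0,                                          # flags
    )
    print(caps_xml)                                 # prints a <domainCapabilities> document like the one logged
    conn.close()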
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.511 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 05 10:07:53 compute-0 nova_compute[257087]: <domainCapabilities>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <domain>kvm</domain>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <arch>i686</arch>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <vcpu max='240'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <iothreads supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <os supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <enum name='firmware'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <loader supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>rom</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pflash</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='readonly'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>yes</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>no</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='secure'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>no</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </loader>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </os>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <cpu>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='host-passthrough' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='hostPassthroughMigratable'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>on</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>off</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='maximum' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='maximumMigratable'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>on</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>off</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='host-model' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <vendor>AMD</vendor>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='x2apic'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='hypervisor'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='stibp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='overflow-recov'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='succor'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='ibrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='lbrv'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='tsc-scale'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='flushbyasid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='pause-filter'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='pfthreshold'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='disable' name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='custom' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cooperlake'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cooperlake-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cooperlake-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Dhyana-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Genoa'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amd-psfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='auto-ibrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='stibp-always-on'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amd-psfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='auto-ibrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='stibp-always-on'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Milan'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Milan-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Milan-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amd-psfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='stibp-always-on'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:53.580Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='GraniteRapids'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:53.583Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='prefetchiti'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='GraniteRapids-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='prefetchiti'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='GraniteRapids-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10-128'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10-256'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10-512'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='prefetchiti'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v6'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v7'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='KnightsMill'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512er'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512pf'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='KnightsMill-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512er'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512pf'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G4-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tbm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G5-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tbm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SierraForest'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cmpccxadd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SierraForest-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cmpccxadd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='athlon'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='athlon-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='core2duo'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='core2duo-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='coreduo'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='coreduo-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='n270'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='n270-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='phenom'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='phenom-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </cpu>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <memoryBacking supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <enum name='sourceType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>file</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>anonymous</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>memfd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </memoryBacking>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <devices>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <disk supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='diskDevice'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>disk</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>cdrom</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>floppy</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>lun</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='bus'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>ide</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>fdc</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>scsi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>usb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>sata</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-non-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </disk>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <graphics supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vnc</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>egl-headless</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>dbus</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </graphics>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <video supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='modelType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vga</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>cirrus</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>none</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>bochs</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>ramfb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </video>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <hostdev supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='mode'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>subsystem</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='startupPolicy'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>default</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>mandatory</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>requisite</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>optional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='subsysType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>usb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pci</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>scsi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='capsType'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='pciBackend'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </hostdev>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <rng supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-non-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendModel'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>random</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>egd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>builtin</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </rng>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <filesystem supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='driverType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>path</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>handle</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtiofs</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </filesystem>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <tpm supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tpm-tis</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tpm-crb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendModel'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>emulator</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>external</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendVersion'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>2.0</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </tpm>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <redirdev supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='bus'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>usb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </redirdev>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <channel supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pty</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>unix</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </channel>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <crypto supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>qemu</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendModel'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>builtin</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </crypto>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <interface supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>default</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>passt</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </interface>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <panic supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>isa</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>hyperv</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </panic>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <console supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>null</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vc</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pty</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>dev</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>file</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pipe</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>stdio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>udp</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tcp</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>unix</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>qemu-vdagent</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>dbus</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </console>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </devices>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <features>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <gic supported='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <vmcoreinfo supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <genid supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <backingStoreInput supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <backup supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <async-teardown supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <ps2 supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <sev supported='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <sgx supported='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <hyperv supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='features'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>relaxed</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vapic</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>spinlocks</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vpindex</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>runtime</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>synic</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>stimer</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>reset</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vendor_id</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>frequencies</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>reenlightenment</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tlbflush</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>ipi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>avic</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>emsr_bitmap</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>xmm_input</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <defaults>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <spinlocks>4095</spinlocks>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <stimer_direct>on</stimer_direct>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </defaults>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </hyperv>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <launchSecurity supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='sectype'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tdx</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </launchSecurity>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </features>
Dec 05 10:07:53 compute-0 nova_compute[257087]: </domainCapabilities>
Dec 05 10:07:53 compute-0 nova_compute[257087]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.544 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.549 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 05 10:07:53 compute-0 nova_compute[257087]: <domainCapabilities>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <domain>kvm</domain>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <arch>x86_64</arch>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <vcpu max='4096'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <iothreads supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <os supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <enum name='firmware'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>efi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <loader supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>rom</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pflash</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='readonly'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>yes</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>no</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='secure'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>yes</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>no</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </loader>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </os>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <cpu>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='host-passthrough' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='hostPassthroughMigratable'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>on</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>off</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='maximum' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='maximumMigratable'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>on</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>off</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='host-model' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <vendor>AMD</vendor>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='x2apic'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='hypervisor'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='stibp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='overflow-recov'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='succor'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='ibrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='lbrv'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='tsc-scale'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='flushbyasid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='pause-filter'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='pfthreshold'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='disable' name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='custom' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cooperlake'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cooperlake-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cooperlake-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Dhyana-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Genoa'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amd-psfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='auto-ibrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='stibp-always-on'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amd-psfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='auto-ibrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='stibp-always-on'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Milan'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Milan-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Milan-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amd-psfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='stibp-always-on'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 10:07:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:53.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='GraniteRapids'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='prefetchiti'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='GraniteRapids-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='prefetchiti'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='GraniteRapids-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10-128'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10-256'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10-512'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='prefetchiti'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v6'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v7'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='KnightsMill'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512er'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512pf'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='KnightsMill-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512er'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512pf'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G4-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tbm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G5-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tbm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SierraForest'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cmpccxadd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SierraForest-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cmpccxadd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='athlon'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='athlon-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='core2duo'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='core2duo-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='coreduo'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='coreduo-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='n270'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='n270-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='phenom'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='phenom-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </cpu>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <memoryBacking supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <enum name='sourceType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>file</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>anonymous</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>memfd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </memoryBacking>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <devices>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <disk supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='diskDevice'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>disk</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>cdrom</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>floppy</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>lun</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='bus'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>fdc</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>scsi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>usb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>sata</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-non-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </disk>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <graphics supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vnc</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>egl-headless</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>dbus</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </graphics>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <video supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='modelType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vga</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>cirrus</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>none</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>bochs</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>ramfb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </video>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <hostdev supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='mode'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>subsystem</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='startupPolicy'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>default</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>mandatory</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>requisite</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>optional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='subsysType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>usb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pci</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>scsi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='capsType'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='pciBackend'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </hostdev>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <rng supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-non-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendModel'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>random</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>egd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>builtin</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </rng>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <filesystem supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='driverType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>path</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>handle</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtiofs</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </filesystem>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <tpm supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tpm-tis</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tpm-crb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendModel'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>emulator</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>external</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendVersion'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>2.0</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </tpm>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <redirdev supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='bus'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>usb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </redirdev>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <channel supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pty</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>unix</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </channel>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <crypto supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>qemu</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendModel'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>builtin</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </crypto>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <interface supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>default</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>passt</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </interface>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <panic supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>isa</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>hyperv</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </panic>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <console supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>null</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vc</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pty</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>dev</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>file</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pipe</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>stdio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>udp</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tcp</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>unix</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>qemu-vdagent</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>dbus</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </console>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </devices>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <features>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <gic supported='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <vmcoreinfo supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <genid supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <backingStoreInput supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <backup supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <async-teardown supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <ps2 supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <sev supported='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <sgx supported='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <hyperv supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='features'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>relaxed</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vapic</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>spinlocks</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vpindex</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>runtime</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>synic</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>stimer</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>reset</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vendor_id</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>frequencies</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>reenlightenment</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tlbflush</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>ipi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>avic</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>emsr_bitmap</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>xmm_input</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <defaults>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <spinlocks>4095</spinlocks>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <stimer_direct>on</stimer_direct>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </defaults>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </hyperv>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <launchSecurity supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='sectype'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tdx</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </launchSecurity>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </features>
Dec 05 10:07:53 compute-0 nova_compute[257087]: </domainCapabilities>
Dec 05 10:07:53 compute-0 nova_compute[257087]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.638 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 05 10:07:53 compute-0 nova_compute[257087]: <domainCapabilities>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <domain>kvm</domain>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <arch>x86_64</arch>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <vcpu max='240'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <iothreads supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <os supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <enum name='firmware'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <loader supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>rom</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pflash</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='readonly'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>yes</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>no</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='secure'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>no</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </loader>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </os>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <cpu>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='host-passthrough' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='hostPassthroughMigratable'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>on</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>off</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='maximum' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='maximumMigratable'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>on</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>off</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='host-model' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <vendor>AMD</vendor>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='x2apic'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='hypervisor'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='stibp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='overflow-recov'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='succor'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='ibrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='lbrv'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='tsc-scale'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='flushbyasid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='pause-filter'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='pfthreshold'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <feature policy='disable' name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <mode name='custom' supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Broadwell-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cooperlake'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cooperlake-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Cooperlake-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Denverton-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Dhyana-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Genoa'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amd-psfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='auto-ibrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='stibp-always-on'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amd-psfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='auto-ibrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='stibp-always-on'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Milan'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Milan-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Milan-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amd-psfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='no-nested-data-bp'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='null-sel-clr-base'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='stibp-always-on'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-Rome-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='EPYC-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='GraniteRapids'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='prefetchiti'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='GraniteRapids-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='prefetchiti'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='GraniteRapids-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10-128'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10-256'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx10-512'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='prefetchiti'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Haswell-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v6'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Icelake-Server-v7'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='IvyBridge-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='KnightsMill'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512er'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512pf'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='KnightsMill-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4fmaps'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-4vnniw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512er'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512pf'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G4-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tbm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Opteron_G5-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fma4'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tbm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xop'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SapphireRapids-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='amx-tile'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-bf16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-fp16'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512-vpopcntdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bitalg'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vbmi2'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrc'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fzrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='la57'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='taa-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='tsx-ldtrk'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xfd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SierraForest'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cmpccxadd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='SierraForest-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ifma'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-ne-convert'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx-vnni-int8'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='bus-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cmpccxadd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fbsdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='fsrs'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ibrs-all'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mcdt-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pbrsb-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='psdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='sbdr-ssdp-no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='serialize'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vaes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='vpclmulqdq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Client-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='hle'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='rtm'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Skylake-Server-v5'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512bw'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512cd'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512dq'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512f'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='avx512vl'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='invpcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pcid'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='pku'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='mpx'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v2'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v3'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='core-capability'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='split-lock-detect'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='Snowridge-v4'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='cldemote'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='erms'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='gfni'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdir64b'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='movdiri'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='xsaves'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='athlon'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='athlon-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='core2duo'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='core2duo-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='coreduo'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='coreduo-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='n270'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='n270-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='ss'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='phenom'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <blockers model='phenom-v1'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnow'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <feature name='3dnowext'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </blockers>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </mode>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </cpu>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <memoryBacking supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <enum name='sourceType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>file</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>anonymous</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <value>memfd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </memoryBacking>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <devices>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <disk supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='diskDevice'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>disk</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>cdrom</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>floppy</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>lun</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='bus'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>ide</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>fdc</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>scsi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>usb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>sata</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-non-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </disk>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <graphics supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vnc</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>egl-headless</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>dbus</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </graphics>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <video supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='modelType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vga</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>cirrus</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>none</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>bochs</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>ramfb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </video>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <hostdev supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='mode'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>subsystem</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='startupPolicy'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>default</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>mandatory</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>requisite</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>optional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='subsysType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>usb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pci</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>scsi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='capsType'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='pciBackend'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </hostdev>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <rng supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtio-non-transitional</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendModel'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>random</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>egd</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>builtin</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </rng>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <filesystem supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='driverType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>path</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>handle</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>virtiofs</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </filesystem>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <tpm supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tpm-tis</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tpm-crb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendModel'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>emulator</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>external</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendVersion'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>2.0</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </tpm>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <redirdev supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='bus'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>usb</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </redirdev>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <channel supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pty</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>unix</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </channel>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <crypto supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>qemu</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendModel'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>builtin</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </crypto>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <interface supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='backendType'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>default</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>passt</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </interface>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <panic supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='model'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>isa</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>hyperv</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </panic>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <console supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='type'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>null</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vc</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pty</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>dev</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>file</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>pipe</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>stdio</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>udp</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tcp</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>unix</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>qemu-vdagent</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>dbus</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </console>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </devices>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   <features>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <gic supported='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <vmcoreinfo supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <genid supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <backingStoreInput supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <backup supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <async-teardown supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <ps2 supported='yes'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <sev supported='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <sgx supported='no'/>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <hyperv supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='features'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>relaxed</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vapic</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>spinlocks</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vpindex</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>runtime</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>synic</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>stimer</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>reset</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>vendor_id</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>frequencies</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>reenlightenment</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tlbflush</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>ipi</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>avic</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>emsr_bitmap</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>xmm_input</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <defaults>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <spinlocks>4095</spinlocks>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <stimer_direct>on</stimer_direct>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </defaults>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </hyperv>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     <launchSecurity supported='yes'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       <enum name='sectype'>
Dec 05 10:07:53 compute-0 nova_compute[257087]:         <value>tdx</value>
Dec 05 10:07:53 compute-0 nova_compute[257087]:       </enum>
Dec 05 10:07:53 compute-0 nova_compute[257087]:     </launchSecurity>
Dec 05 10:07:53 compute-0 nova_compute[257087]:   </features>
Dec 05 10:07:53 compute-0 nova_compute[257087]: </domainCapabilities>
Dec 05 10:07:53 compute-0 nova_compute[257087]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.706 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.706 257094 INFO nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Secure Boot support detected
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.709 257094 INFO nova.virt.libvirt.driver [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.709 257094 INFO nova.virt.libvirt.driver [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.719 257094 DEBUG nova.virt.libvirt.driver [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.741 257094 INFO nova.virt.node [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Determined node identity bad8518e-442e-4fc2-b7f3-2c453f1840d6 from /var/lib/nova/compute_id
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.760 257094 WARNING nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Compute nodes ['bad8518e-442e-4fc2-b7f3-2c453f1840d6'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 05 10:07:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:53 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3930998513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:07:53 compute-0 ceph-mon[74418]: pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.941 257094 INFO nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.975 257094 WARNING nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.975 257094 DEBUG oslo_concurrency.lockutils [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.976 257094 DEBUG oslo_concurrency.lockutils [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.976 257094 DEBUG oslo_concurrency.lockutils [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.976 257094 DEBUG nova.compute.resource_tracker [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:07:53 compute-0 nova_compute[257087]: 2025-12-05 10:07:53.977 257094 DEBUG oslo_concurrency.processutils [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:07:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:07:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:54.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:07:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:07:54 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2906292990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:07:54 compute-0 nova_compute[257087]: 2025-12-05 10:07:54.478 257094 DEBUG oslo_concurrency.processutils [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:07:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100754 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:07:54 compute-0 nova_compute[257087]: 2025-12-05 10:07:54.681 257094 WARNING nova.virt.libvirt.driver [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:07:54 compute-0 nova_compute[257087]: 2025-12-05 10:07:54.683 257094 DEBUG nova.compute.resource_tracker [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4904MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:07:54 compute-0 nova_compute[257087]: 2025-12-05 10:07:54.684 257094 DEBUG oslo_concurrency.lockutils [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:07:54 compute-0 nova_compute[257087]: 2025-12-05 10:07:54.684 257094 DEBUG oslo_concurrency.lockutils [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:07:54 compute-0 nova_compute[257087]: 2025-12-05 10:07:54.702 257094 WARNING nova.compute.resource_tracker [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] No compute node record for compute-0.ctlplane.example.com:bad8518e-442e-4fc2-b7f3-2c453f1840d6: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host bad8518e-442e-4fc2-b7f3-2c453f1840d6 could not be found.
Dec 05 10:07:54 compute-0 nova_compute[257087]: 2025-12-05 10:07:54.725 257094 INFO nova.compute.resource_tracker [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: bad8518e-442e-4fc2-b7f3-2c453f1840d6
Dec 05 10:07:54 compute-0 nova_compute[257087]: 2025-12-05 10:07:54.807 257094 DEBUG nova.compute.resource_tracker [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:07:54 compute-0 nova_compute[257087]: 2025-12-05 10:07:54.808 257094 DEBUG nova.compute.resource_tracker [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:07:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3075544978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:07:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2906292990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:07:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1199275557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:07:54 compute-0 nova_compute[257087]: 2025-12-05 10:07:54.908 257094 INFO nova.scheduler.client.report [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] [req-e17a31bc-f38a-45df-8dfe-ba2c7f24605c] Created resource provider record via placement API for resource provider with UUID bad8518e-442e-4fc2-b7f3-2c453f1840d6 and name compute-0.ctlplane.example.com.
Dec 05 10:07:54 compute-0 nova_compute[257087]: 2025-12-05 10:07:54.928 257094 DEBUG oslo_concurrency.processutils [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:07:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:07:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:07:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197957415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.403 257094 DEBUG oslo_concurrency.processutils [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.408 257094 DEBUG nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 05 10:07:55 compute-0 nova_compute[257087]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.409 257094 INFO nova.virt.libvirt.host [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] kernel doesn't support AMD SEV
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.409 257094 DEBUG nova.compute.provider_tree [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Updating inventory in ProviderTree for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.410 257094 DEBUG nova.virt.libvirt.driver [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.452 257094 DEBUG nova.scheduler.client.report [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Updated inventory for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.452 257094 DEBUG nova.compute.provider_tree [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Updating resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.452 257094 DEBUG nova.compute.provider_tree [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Updating inventory in ProviderTree for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.598 257094 DEBUG nova.compute.provider_tree [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Updating resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.634 257094 DEBUG nova.compute.resource_tracker [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.635 257094 DEBUG oslo_concurrency.lockutils [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.635 257094 DEBUG nova.service [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 05 10:07:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:07:55] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:07:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:07:55] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:07:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:55.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.705 257094 DEBUG nova.service [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 05 10:07:55 compute-0 nova_compute[257087]: 2025-12-05 10:07:55.706 257094 DEBUG nova.servicegroup.drivers.db [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 05 10:07:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:55 compute-0 ceph-mon[74418]: pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:07:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3197957415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:07:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:07:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:56.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:07:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:07:57.291Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:07:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:07:57 compute-0 ceph-mon[74418]: pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:07:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:07:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:07:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:07:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:07:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:07:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:07:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:07:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:07:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:57.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:07:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:07:58.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:07:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:07:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:07:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:07:59 compute-0 podman[257443]: 2025-12-05 10:07:59.434781826 +0000 UTC m=+0.080786561 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 10:07:59 compute-0 podman[257444]: 2025-12-05 10:07:59.438443955 +0000 UTC m=+0.083221226 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:07:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:07:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:07:59.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:07:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:07:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:00.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:00 compute-0 ceph-mon[74418]: pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:08:00 compute-0 nova_compute[257087]: 2025-12-05 10:08:00.709 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:00 compute-0 nova_compute[257087]: 2025-12-05 10:08:00.787 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:08:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:01.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:08:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.003000080s ======
Dec 05 10:08:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:02.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec 05 10:08:02 compute-0 podman[257485]: 2025-12-05 10:08:02.451117731 +0000 UTC m=+0.110060686 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Dec 05 10:08:02 compute-0 ceph-mon[74418]: pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:08:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:08:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:08:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:03.584Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:08:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:03.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:04.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:04 compute-0 ceph-mon[74418]: pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:08:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:08:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004280 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:08:05] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 10:08:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:08:05] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec 05 10:08:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:05.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:08:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:08:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:08:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:06.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:06 compute-0 ceph-mon[74418]: pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:08:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:07.292Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:08:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:08:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:07 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:08:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:07.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:07 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:08:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:08.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:08:08 compute-0 ceph-mon[74418]: pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:08:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:08:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00042c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 933 B/s wr, 3 op/s
Dec 05 10:08:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:09.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00042c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:10 compute-0 sudo[257521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:08:10 compute-0 sudo[257521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:08:10 compute-0 sudo[257521]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:10.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:10 compute-0 ceph-mon[74418]: pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 933 B/s wr, 3 op/s
Dec 05 10:08:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb400c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 933 B/s wr, 3 op/s
Dec 05 10:08:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:11.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:08:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00042c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:12.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:08:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:08:12 compute-0 ceph-mon[74418]: pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 933 B/s wr, 3 op/s
Dec 05 10:08:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:08:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:13 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 933 B/s wr, 3 op/s
Dec 05 10:08:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:13.585Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:08:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:13.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:13 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb400c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:14.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100814 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:08:15 compute-0 ceph-mon[74418]: pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 933 B/s wr, 3 op/s
Dec 05 10:08:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1018 B/s wr, 3 op/s
Dec 05 10:08:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:08:15] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:08:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:08:15] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:08:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:15.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00042c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb400c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:16.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:17 compute-0 ceph-mon[74418]: pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1018 B/s wr, 3 op/s
Dec 05 10:08:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:17.294Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:08:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:08:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 424 B/s wr, 1 op/s
Dec 05 10:08:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:17.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00042c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:18.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:19 compute-0 ceph-mon[74418]: pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 424 B/s wr, 1 op/s
Dec 05 10:08:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 424 B/s wr, 61 op/s
Dec 05 10:08:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00042c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:19.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:20.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:08:20.564 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:08:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:08:20.567 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:08:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:08:20.567 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:08:21 compute-0 ceph-mon[74418]: pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 424 B/s wr, 61 op/s
Dec 05 10:08:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:21 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb400c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Dec 05 10:08:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:21.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:21 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00042c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:08:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00042c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:22.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:23 compute-0 ceph-mon[74418]: pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Dec 05 10:08:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00042c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Dec 05 10:08:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:08:23 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2010074155' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:08:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:08:23 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2010074155' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:08:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:23.587Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:08:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:23.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:08:23 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2944294849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:08:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:08:23 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2944294849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:08:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb400c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:24 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2010074155' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:08:24 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2010074155' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:08:24 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2944294849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:08:24 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2944294849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:08:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:24.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:25 compute-0 ceph-mon[74418]: pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Dec 05 10:08:25 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/915862596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:08:25 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/915862596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:08:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:25 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Dec 05 10:08:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:08:25] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:08:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:08:25] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:08:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:25.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:25 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb400c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:26.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:27.296Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:08:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 10:08:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:08:27 compute-0 ceph-mon[74418]: pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:08:27
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.data', 'vms', '.nfs', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'images', '.mgr', 'cephfs.cephfs.meta']
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:08:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:08:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:08:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:27.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:08:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:08:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:28.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:28 compute-0 ceph-mon[74418]: pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 10:08:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:08:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 10:08:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:29.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:30 compute-0 sudo[257566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:08:30 compute-0 sudo[257566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:08:30 compute-0 sudo[257566]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:30 compute-0 podman[257591]: 2025-12-05 10:08:30.279505999 +0000 UTC m=+0.067409223 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:08:30 compute-0 podman[257592]: 2025-12-05 10:08:30.285978387 +0000 UTC m=+0.073880561 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:08:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:30.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:30 compute-0 ceph-mon[74418]: pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 10:08:30 compute-0 sudo[257627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:08:30 compute-0 sudo[257627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:08:30 compute-0 sudo[257627]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:30 compute-0 sudo[257652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:08:30 compute-0 sudo[257652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:08:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb400c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:08:31 compute-0 sudo[257652]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:08:31 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:08:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:08:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:08:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:08:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:08:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:08:31 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:08:31 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:08:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:08:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:08:31 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:08:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:08:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:08:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:08:31 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:08:31 compute-0 sudo[257708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:08:31 compute-0 sudo[257708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:08:31 compute-0 sudo[257708]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:31.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:31 compute-0 sudo[257733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:08:31 compute-0 sudo[257733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:08:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:32 compute-0 podman[257800]: 2025-12-05 10:08:32.227399782 +0000 UTC m=+0.053600256 container create 861a452090c0025cbc996d859bb342294e777f88fe4698ec1756220ab2470066 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:08:32 compute-0 systemd[1]: Started libpod-conmon-861a452090c0025cbc996d859bb342294e777f88fe4698ec1756220ab2470066.scope.
Dec 05 10:08:32 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:08:32 compute-0 podman[257800]: 2025-12-05 10:08:32.210041698 +0000 UTC m=+0.036242202 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:08:32 compute-0 podman[257800]: 2025-12-05 10:08:32.324836025 +0000 UTC m=+0.151036529 container init 861a452090c0025cbc996d859bb342294e777f88fe4698ec1756220ab2470066 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 05 10:08:32 compute-0 podman[257800]: 2025-12-05 10:08:32.33635563 +0000 UTC m=+0.162556134 container start 861a452090c0025cbc996d859bb342294e777f88fe4698ec1756220ab2470066 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_allen, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:08:32 compute-0 podman[257800]: 2025-12-05 10:08:32.34076648 +0000 UTC m=+0.166967014 container attach 861a452090c0025cbc996d859bb342294e777f88fe4698ec1756220ab2470066 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 05 10:08:32 compute-0 dreamy_allen[257816]: 167 167
Dec 05 10:08:32 compute-0 systemd[1]: libpod-861a452090c0025cbc996d859bb342294e777f88fe4698ec1756220ab2470066.scope: Deactivated successfully.
Dec 05 10:08:32 compute-0 podman[257800]: 2025-12-05 10:08:32.344872033 +0000 UTC m=+0.171072507 container died 861a452090c0025cbc996d859bb342294e777f88fe4698ec1756220ab2470066 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_allen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:08:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f58a9f8900e0b9e892aa73e4f950977bf163fb510a7ffd5516ef33140a696dde-merged.mount: Deactivated successfully.
Dec 05 10:08:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:08:32 compute-0 podman[257800]: 2025-12-05 10:08:32.380299421 +0000 UTC m=+0.206499895 container remove 861a452090c0025cbc996d859bb342294e777f88fe4698ec1756220ab2470066 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:08:32 compute-0 systemd[1]: libpod-conmon-861a452090c0025cbc996d859bb342294e777f88fe4698ec1756220ab2470066.scope: Deactivated successfully.
Dec 05 10:08:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:32.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:32 compute-0 podman[257840]: 2025-12-05 10:08:32.604420307 +0000 UTC m=+0.069843441 container create c45c42e560a988ba242cdafa61ee5efd7a7ca175188e7f708f135cff1a3d3356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 10:08:32 compute-0 ceph-mon[74418]: pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:08:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:08:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:08:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:08:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:08:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:08:32 compute-0 systemd[1]: Started libpod-conmon-c45c42e560a988ba242cdafa61ee5efd7a7ca175188e7f708f135cff1a3d3356.scope.
Dec 05 10:08:32 compute-0 podman[257840]: 2025-12-05 10:08:32.577852641 +0000 UTC m=+0.043275825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:08:32 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:08:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb70c0b3c2683fdf05aedaa378368286c0cf8c2d8d0ceab8706fd6eafeba746a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb70c0b3c2683fdf05aedaa378368286c0cf8c2d8d0ceab8706fd6eafeba746a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb70c0b3c2683fdf05aedaa378368286c0cf8c2d8d0ceab8706fd6eafeba746a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb70c0b3c2683fdf05aedaa378368286c0cf8c2d8d0ceab8706fd6eafeba746a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb70c0b3c2683fdf05aedaa378368286c0cf8c2d8d0ceab8706fd6eafeba746a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:32 compute-0 podman[257840]: 2025-12-05 10:08:32.719152123 +0000 UTC m=+0.184575247 container init c45c42e560a988ba242cdafa61ee5efd7a7ca175188e7f708f135cff1a3d3356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 05 10:08:32 compute-0 podman[257840]: 2025-12-05 10:08:32.738554213 +0000 UTC m=+0.203977307 container start c45c42e560a988ba242cdafa61ee5efd7a7ca175188e7f708f135cff1a3d3356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 10:08:32 compute-0 podman[257840]: 2025-12-05 10:08:32.745201315 +0000 UTC m=+0.210624399 container attach c45c42e560a988ba242cdafa61ee5efd7a7ca175188e7f708f135cff1a3d3356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_snyder, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 05 10:08:32 compute-0 podman[257854]: 2025-12-05 10:08:32.803406806 +0000 UTC m=+0.140685507 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:08:33 compute-0 gallant_snyder[257857]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:08:33 compute-0 gallant_snyder[257857]: --> All data devices are unavailable
Dec 05 10:08:33 compute-0 systemd[1]: libpod-c45c42e560a988ba242cdafa61ee5efd7a7ca175188e7f708f135cff1a3d3356.scope: Deactivated successfully.
Dec 05 10:08:33 compute-0 podman[257840]: 2025-12-05 10:08:33.138130285 +0000 UTC m=+0.603553419 container died c45c42e560a988ba242cdafa61ee5efd7a7ca175188e7f708f135cff1a3d3356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_snyder, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:08:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb70c0b3c2683fdf05aedaa378368286c0cf8c2d8d0ceab8706fd6eafeba746a-merged.mount: Deactivated successfully.
Dec 05 10:08:33 compute-0 podman[257840]: 2025-12-05 10:08:33.347215909 +0000 UTC m=+0.812638993 container remove c45c42e560a988ba242cdafa61ee5efd7a7ca175188e7f708f135cff1a3d3356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 05 10:08:33 compute-0 systemd[1]: libpod-conmon-c45c42e560a988ba242cdafa61ee5efd7a7ca175188e7f708f135cff1a3d3356.scope: Deactivated successfully.
Dec 05 10:08:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:08:33 compute-0 sudo[257733]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:33 compute-0 sudo[257911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:08:33 compute-0 sudo[257911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:08:33 compute-0 sudo[257911]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:33 compute-0 sudo[257936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:08:33 compute-0 sudo[257936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:08:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:33.588Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:08:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:33.590Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:08:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:08:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:33.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:08:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb400c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:34 compute-0 podman[258001]: 2025-12-05 10:08:33.929833684 +0000 UTC m=+0.028455098 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:08:34 compute-0 podman[258001]: 2025-12-05 10:08:34.047453669 +0000 UTC m=+0.146075083 container create b5c9d35de4bc65cd96ffe14365a1b949463400d8ac463e00791635e97ab0fb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kowalevski, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:08:34 compute-0 systemd[1]: Started libpod-conmon-b5c9d35de4bc65cd96ffe14365a1b949463400d8ac463e00791635e97ab0fb37.scope.
Dec 05 10:08:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:08:34 compute-0 podman[258001]: 2025-12-05 10:08:34.226088122 +0000 UTC m=+0.324709536 container init b5c9d35de4bc65cd96ffe14365a1b949463400d8ac463e00791635e97ab0fb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 10:08:34 compute-0 podman[258001]: 2025-12-05 10:08:34.239930171 +0000 UTC m=+0.338551545 container start b5c9d35de4bc65cd96ffe14365a1b949463400d8ac463e00791635e97ab0fb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kowalevski, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Dec 05 10:08:34 compute-0 awesome_kowalevski[258017]: 167 167
Dec 05 10:08:34 compute-0 systemd[1]: libpod-b5c9d35de4bc65cd96ffe14365a1b949463400d8ac463e00791635e97ab0fb37.scope: Deactivated successfully.
Dec 05 10:08:34 compute-0 podman[258001]: 2025-12-05 10:08:34.250189631 +0000 UTC m=+0.348811035 container attach b5c9d35de4bc65cd96ffe14365a1b949463400d8ac463e00791635e97ab0fb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kowalevski, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 10:08:34 compute-0 podman[258001]: 2025-12-05 10:08:34.25162707 +0000 UTC m=+0.350248454 container died b5c9d35de4bc65cd96ffe14365a1b949463400d8ac463e00791635e97ab0fb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 10:08:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a877a5cf11aeb1456bc5a345e097a2e119726bfb20717e03f5f025baa0b0a96a-merged.mount: Deactivated successfully.
Dec 05 10:08:34 compute-0 podman[258001]: 2025-12-05 10:08:34.38771402 +0000 UTC m=+0.486335404 container remove b5c9d35de4bc65cd96ffe14365a1b949463400d8ac463e00791635e97ab0fb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kowalevski, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:08:34 compute-0 systemd[1]: libpod-conmon-b5c9d35de4bc65cd96ffe14365a1b949463400d8ac463e00791635e97ab0fb37.scope: Deactivated successfully.
Dec 05 10:08:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:34.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:34 compute-0 podman[258043]: 2025-12-05 10:08:34.630368753 +0000 UTC m=+0.071934197 container create 434bf6374dc60df1fcbb2748beb5276cf707882f75923d78eb88bbea17346aac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:08:34 compute-0 systemd[1]: Started libpod-conmon-434bf6374dc60df1fcbb2748beb5276cf707882f75923d78eb88bbea17346aac.scope.
Dec 05 10:08:34 compute-0 podman[258043]: 2025-12-05 10:08:34.602922573 +0000 UTC m=+0.044488107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:08:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:08:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b671b64af4e9f088d271c3bcee7b3c39cbf2e004181dc45b93d73205d85441f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b671b64af4e9f088d271c3bcee7b3c39cbf2e004181dc45b93d73205d85441f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b671b64af4e9f088d271c3bcee7b3c39cbf2e004181dc45b93d73205d85441f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b671b64af4e9f088d271c3bcee7b3c39cbf2e004181dc45b93d73205d85441f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:34 compute-0 podman[258043]: 2025-12-05 10:08:34.72539329 +0000 UTC m=+0.166958754 container init 434bf6374dc60df1fcbb2748beb5276cf707882f75923d78eb88bbea17346aac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 05 10:08:34 compute-0 podman[258043]: 2025-12-05 10:08:34.731551049 +0000 UTC m=+0.173116523 container start 434bf6374dc60df1fcbb2748beb5276cf707882f75923d78eb88bbea17346aac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mirzakhani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 05 10:08:34 compute-0 ceph-mon[74418]: pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:08:34 compute-0 podman[258043]: 2025-12-05 10:08:34.751088433 +0000 UTC m=+0.192653877 container attach 434bf6374dc60df1fcbb2748beb5276cf707882f75923d78eb88bbea17346aac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mirzakhani, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]: {
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:     "1": [
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:         {
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             "devices": [
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "/dev/loop3"
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             ],
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             "lv_name": "ceph_lv0",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             "lv_size": "21470642176",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             "name": "ceph_lv0",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             "tags": {
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.cluster_name": "ceph",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.crush_device_class": "",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.encrypted": "0",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.osd_id": "1",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.type": "block",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.vdo": "0",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:                 "ceph.with_tpm": "0"
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             },
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             "type": "block",
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:             "vg_name": "ceph_vg0"
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:         }
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]:     ]
Dec 05 10:08:35 compute-0 elegant_mirzakhani[258059]: }
Dec 05 10:08:35 compute-0 systemd[1]: libpod-434bf6374dc60df1fcbb2748beb5276cf707882f75923d78eb88bbea17346aac.scope: Deactivated successfully.
Dec 05 10:08:35 compute-0 podman[258043]: 2025-12-05 10:08:35.079386856 +0000 UTC m=+0.520952310 container died 434bf6374dc60df1fcbb2748beb5276cf707882f75923d78eb88bbea17346aac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 05 10:08:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:08:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b671b64af4e9f088d271c3bcee7b3c39cbf2e004181dc45b93d73205d85441f5-merged.mount: Deactivated successfully.
Dec 05 10:08:35 compute-0 podman[258043]: 2025-12-05 10:08:35.580772231 +0000 UTC m=+1.022337665 container remove 434bf6374dc60df1fcbb2748beb5276cf707882f75923d78eb88bbea17346aac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mirzakhani, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 10:08:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:08:35] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Dec 05 10:08:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:08:35] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Dec 05 10:08:35 compute-0 sudo[257936]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:35 compute-0 systemd[1]: libpod-conmon-434bf6374dc60df1fcbb2748beb5276cf707882f75923d78eb88bbea17346aac.scope: Deactivated successfully.
Dec 05 10:08:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:35.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:35 compute-0 sudo[258080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:08:35 compute-0 sudo[258080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:08:35 compute-0 sudo[258080]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:35 compute-0 sudo[258105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:08:35 compute-0 sudo[258105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:08:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:36 compute-0 podman[258172]: 2025-12-05 10:08:36.251756701 +0000 UTC m=+0.031760189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:08:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb400c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:36.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:36 compute-0 podman[258172]: 2025-12-05 10:08:36.481185442 +0000 UTC m=+0.261188880 container create fe7c16ec308e77ffa92fc75bb8c3636f5522d385dabfb01bb52eea3ec8e270c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_colden, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 05 10:08:36 compute-0 systemd[1]: Started libpod-conmon-fe7c16ec308e77ffa92fc75bb8c3636f5522d385dabfb01bb52eea3ec8e270c4.scope.
Dec 05 10:08:36 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:08:36 compute-0 podman[258172]: 2025-12-05 10:08:36.892385502 +0000 UTC m=+0.672388920 container init fe7c16ec308e77ffa92fc75bb8c3636f5522d385dabfb01bb52eea3ec8e270c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_colden, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:08:36 compute-0 podman[258172]: 2025-12-05 10:08:36.899681371 +0000 UTC m=+0.679684769 container start fe7c16ec308e77ffa92fc75bb8c3636f5522d385dabfb01bb52eea3ec8e270c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_colden, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:08:36 compute-0 priceless_colden[258189]: 167 167
Dec 05 10:08:36 compute-0 systemd[1]: libpod-fe7c16ec308e77ffa92fc75bb8c3636f5522d385dabfb01bb52eea3ec8e270c4.scope: Deactivated successfully.
Dec 05 10:08:36 compute-0 podman[258172]: 2025-12-05 10:08:36.982106234 +0000 UTC m=+0.762109652 container attach fe7c16ec308e77ffa92fc75bb8c3636f5522d385dabfb01bb52eea3ec8e270c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_colden, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 05 10:08:36 compute-0 ceph-mon[74418]: pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:08:36 compute-0 podman[258172]: 2025-12-05 10:08:36.983389329 +0000 UTC m=+0.763392757 container died fe7c16ec308e77ffa92fc75bb8c3636f5522d385dabfb01bb52eea3ec8e270c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_colden, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 05 10:08:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5ce1d658cb628f3e86ce18130fc53e9e9d629c9721cec200a98f86c42cf019c-merged.mount: Deactivated successfully.
Dec 05 10:08:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:37.297Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:08:37 compute-0 podman[258172]: 2025-12-05 10:08:37.346957656 +0000 UTC m=+1.126961064 container remove fe7c16ec308e77ffa92fc75bb8c3636f5522d385dabfb01bb52eea3ec8e270c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 10:08:37 compute-0 systemd[1]: libpod-conmon-fe7c16ec308e77ffa92fc75bb8c3636f5522d385dabfb01bb52eea3ec8e270c4.scope: Deactivated successfully.
Dec 05 10:08:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:08:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:08:37 compute-0 podman[258213]: 2025-12-05 10:08:37.497965564 +0000 UTC m=+0.023755660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:08:37 compute-0 podman[258213]: 2025-12-05 10:08:37.653940058 +0000 UTC m=+0.179730154 container create bffeda392e9088499c3e03254177528ccd58dc4f7f25ce680c06ba3380890b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 10:08:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:37.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:37 compute-0 systemd[1]: Started libpod-conmon-bffeda392e9088499c3e03254177528ccd58dc4f7f25ce680c06ba3380890b25.scope.
Dec 05 10:08:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90d0c9eba0b7308f756f16e4f92d67835ef75d90cd97a6c279b156f584f70b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90d0c9eba0b7308f756f16e4f92d67835ef75d90cd97a6c279b156f584f70b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90d0c9eba0b7308f756f16e4f92d67835ef75d90cd97a6c279b156f584f70b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90d0c9eba0b7308f756f16e4f92d67835ef75d90cd97a6c279b156f584f70b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:38 compute-0 podman[258213]: 2025-12-05 10:08:38.058912786 +0000 UTC m=+0.584702872 container init bffeda392e9088499c3e03254177528ccd58dc4f7f25ce680c06ba3380890b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_gagarin, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:08:38 compute-0 podman[258213]: 2025-12-05 10:08:38.066172285 +0000 UTC m=+0.591962351 container start bffeda392e9088499c3e03254177528ccd58dc4f7f25ce680c06ba3380890b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:08:38 compute-0 podman[258213]: 2025-12-05 10:08:38.232819699 +0000 UTC m=+0.758609785 container attach bffeda392e9088499c3e03254177528ccd58dc4f7f25ce680c06ba3380890b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 10:08:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100838 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:08:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:38.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:38 compute-0 lvm[258306]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:08:38 compute-0 lvm[258306]: VG ceph_vg0 finished
Dec 05 10:08:38 compute-0 zealous_gagarin[258229]: {}
Dec 05 10:08:38 compute-0 systemd[1]: libpod-bffeda392e9088499c3e03254177528ccd58dc4f7f25ce680c06ba3380890b25.scope: Deactivated successfully.
Dec 05 10:08:38 compute-0 systemd[1]: libpod-bffeda392e9088499c3e03254177528ccd58dc4f7f25ce680c06ba3380890b25.scope: Consumed 1.339s CPU time.
Dec 05 10:08:38 compute-0 podman[258213]: 2025-12-05 10:08:38.967536202 +0000 UTC m=+1.493326368 container died bffeda392e9088499c3e03254177528ccd58dc4f7f25ce680c06ba3380890b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_gagarin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 10:08:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e90d0c9eba0b7308f756f16e4f92d67835ef75d90cd97a6c279b156f584f70b3-merged.mount: Deactivated successfully.
Dec 05 10:08:39 compute-0 podman[258213]: 2025-12-05 10:08:39.027240764 +0000 UTC m=+1.553030830 container remove bffeda392e9088499c3e03254177528ccd58dc4f7f25ce680c06ba3380890b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 10:08:39 compute-0 systemd[1]: libpod-conmon-bffeda392e9088499c3e03254177528ccd58dc4f7f25ce680c06ba3380890b25.scope: Deactivated successfully.
Dec 05 10:08:39 compute-0 sudo[258105]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:08:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:08:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:08:39 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:08:39 compute-0 sudo[258322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:08:39 compute-0 sudo[258322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:08:39 compute-0 sudo[258322]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:39 compute-0 ceph-mon[74418]: pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:08:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:08:39 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:08:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:08:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:39.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:40.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:40 compute-0 ceph-mon[74418]: pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:08:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:08:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:41.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:08:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:42.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:42 compute-0 ceph-mon[74418]: pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:08:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:08:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:08:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:08:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:08:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:43.591Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:08:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:43.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:44.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:44 compute-0 ceph-mon[74418]: pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:08:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:08:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:08:45] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Dec 05 10:08:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:08:45] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Dec 05 10:08:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:46.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:46 compute-0 ceph-mon[74418]: pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:08:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:47.297Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:08:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:08:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:08:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:08:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:47.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:48.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:48 compute-0 ceph-mon[74418]: pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:08:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 05 10:08:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:49.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:50 compute-0 sudo[258359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:08:50 compute-0 sudo[258359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:08:50 compute-0 sudo[258359]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:08:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:08:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:50.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:50 compute-0 ceph-mon[74418]: pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 05 10:08:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 05 10:08:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:51 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/897659228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:08:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:51.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:08:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:52.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.534 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.537 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.538 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.539 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:08:52 compute-0 ceph-mon[74418]: pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 05 10:08:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2069343274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.816 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.817 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.817 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.817 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.817 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.818 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.818 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.818 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.818 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.872 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.874 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.874 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.874 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:08:52 compute-0 nova_compute[257087]: 2025-12-05 10:08:52.875 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:08:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:08:53 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2889418186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:08:53 compute-0 nova_compute[257087]: 2025-12-05 10:08:53.376 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:08:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 05 10:08:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:08:53 compute-0 nova_compute[257087]: 2025-12-05 10:08:53.543 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:08:53 compute-0 nova_compute[257087]: 2025-12-05 10:08:53.545 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4915MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:08:53 compute-0 nova_compute[257087]: 2025-12-05 10:08:53.545 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:08:53 compute-0 nova_compute[257087]: 2025-12-05 10:08:53.545 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:08:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:53.592Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:08:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:53.593Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:08:53 compute-0 nova_compute[257087]: 2025-12-05 10:08:53.630 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:08:53 compute-0 nova_compute[257087]: 2025-12-05 10:08:53.630 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:08:53 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/4273942271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:08:53 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2889418186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:08:53 compute-0 nova_compute[257087]: 2025-12-05 10:08:53.713 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:08:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:53.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:08:54 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198083696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:08:54 compute-0 nova_compute[257087]: 2025-12-05 10:08:54.247 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:08:54 compute-0 nova_compute[257087]: 2025-12-05 10:08:54.254 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:08:54 compute-0 nova_compute[257087]: 2025-12-05 10:08:54.279 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:08:54 compute-0 nova_compute[257087]: 2025-12-05 10:08:54.281 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:08:54 compute-0 nova_compute[257087]: 2025-12-05 10:08:54.281 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:08:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:54.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:54 compute-0 ceph-mon[74418]: pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 05 10:08:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/426807961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:08:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/198083696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:08:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:08:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:08:55] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Dec 05 10:08:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:08:55] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Dec 05 10:08:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:55.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:56.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100856 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:08:56 compute-0 ceph-mon[74418]: pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:08:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:08:57.298Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:08:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:08:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:08:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:08:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:08:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:08:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:08:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:08:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:08:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:08:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:08:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:08:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:57.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:08:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:08:58.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:08:58 compute-0 ceph-mon[74418]: pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:08:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:08:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:08:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:08:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:08:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:08:59.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:08:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:08:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100900 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:09:00 compute-0 podman[258439]: 2025-12-05 10:09:00.41148948 +0000 UTC m=+0.076819642 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_id=multipathd)
Dec 05 10:09:00 compute-0 podman[258438]: 2025-12-05 10:09:00.429411489 +0000 UTC m=+0.095524122 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:09:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:00.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:00 compute-0 ceph-mon[74418]: pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:09:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:09:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:01.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:09:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:09:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:02.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:09:02 compute-0 ceph-mon[74418]: pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:09:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:09:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:03 compute-0 podman[258477]: 2025-12-05 10:09:03.442046735 +0000 UTC m=+0.098878674 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 05 10:09:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:03.594Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:09:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:03.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:04.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:04 compute-0 ceph-mon[74418]: pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:09:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s
Dec 05 10:09:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:09:05] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:09:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:09:05] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:09:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 10:09:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 10:09:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:06.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:09:06 compute-0 ceph-mon[74418]: pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s
Dec 05 10:09:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:07.299Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:09:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:09:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:07 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:09:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:07.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:07 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:08.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:08 compute-0 ceph-mon[74418]: pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:09:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:09:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:09:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:09:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:10 compute-0 sudo[258511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:09:10 compute-0 sudo[258511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:09:10 compute-0 sudo[258511]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:10.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:10 compute-0 ceph-mon[74418]: pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:09:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:09:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:11.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:09:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:12.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:09:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:09:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:09:12 compute-0 ceph-mon[74418]: pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:09:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:09:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:09:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:13 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb40014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:13.595Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:09:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:13.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:13 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:14.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:15 compute-0 ceph-mon[74418]: pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:09:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:09:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:09:15] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:09:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:09:15] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:09:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:15.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 05 10:09:15 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1575246236' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 05 10:09:15 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.15048 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 05 10:09:15 compute-0 ceph-mgr[74711]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 05 10:09:15 compute-0 ceph-mgr[74711]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 05 10:09:15 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.15048 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec 05 10:09:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:15 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.24605 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 05 10:09:15 compute-0 ceph-mgr[74711]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 05 10:09:15 compute-0 ceph-mgr[74711]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 05 10:09:16 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1575246236' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 05 10:09:16 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3202582097' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 05 10:09:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:16.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:17 compute-0 ceph-mon[74418]: pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:09:17 compute-0 ceph-mon[74418]: from='client.15048 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 05 10:09:17 compute-0 ceph-mon[74418]: from='client.15048 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec 05 10:09:17 compute-0 ceph-mon[74418]: from='client.24605 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 05 10:09:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:17.300Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:09:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:09:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:09:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:17.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:18.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100918 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:09:19 compute-0 ceph-mon[74418]: pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:09:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:09:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:19.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:20.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:09:20.566 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:09:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:09:20.567 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:09:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:09:20.568 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:09:21 compute-0 ceph-mon[74418]: pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:09:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:09:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:21 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:21.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:21 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:09:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:22.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:23 compute-0 ceph-mon[74418]: pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:09:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:09:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:23.596Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:09:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:23.597Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:09:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:23.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:24.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:24 compute-0 ceph-mon[74418]: pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:09:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec 05 10:09:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:25 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:09:25] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:09:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:09:25] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:09:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:25.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:25 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:26.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:26 compute-0 ceph-mon[74418]: pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec 05 10:09:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:27.302Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:09:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:09:27
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'backups', '.rgw.root', 'volumes', 'default.rgw.control', '.mgr', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'vms']
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:09:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:09:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:09:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:09:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:27.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:09:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:28.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:29 compute-0 ceph-mon[74418]: pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:09:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:09:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:29.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:30 compute-0 sudo[258558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:09:30 compute-0 sudo[258558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:09:30 compute-0 sudo[258558]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:30.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:30 compute-0 podman[258582]: 2025-12-05 10:09:30.562121078 +0000 UTC m=+0.071805042 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 10:09:30 compute-0 podman[258583]: 2025-12-05 10:09:30.576604924 +0000 UTC m=+0.075329879 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec 05 10:09:31 compute-0 ceph-mon[74418]: pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:09:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:31.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:09:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:32.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:33 compute-0 ceph-mon[74418]: pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:33.598Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:09:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:33.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:34 compute-0 podman[258621]: 2025-12-05 10:09:34.455980351 +0000 UTC m=+0.119018044 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:09:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:34.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:34 compute-0 ceph-mon[74418]: pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:09:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:09:35] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:09:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:09:35] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:09:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:35.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:36.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:36 compute-0 ceph-mon[74418]: pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:09:36 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.24611 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 05 10:09:36 compute-0 ceph-mgr[74711]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 05 10:09:36 compute-0 ceph-mgr[74711]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 05 10:09:36 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.24617 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 05 10:09:36 compute-0 ceph-mgr[74711]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 05 10:09:36 compute-0 ceph-mgr[74711]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 05 10:09:36 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.24617 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec 05 10:09:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:37.303Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:09:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:09:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3962188311' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 05 10:09:37 compute-0 ceph-mon[74418]: from='client.24611 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 05 10:09:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2286973347' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 05 10:09:37 compute-0 ceph-mon[74418]: from='client.24617 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 05 10:09:37 compute-0 ceph-mon[74418]: from='client.24617 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec 05 10:09:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:37.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:38.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:38 compute-0 ceph-mon[74418]: pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:09:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:39 compute-0 sudo[258653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:09:39 compute-0 sudo[258653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:09:39 compute-0 sudo[258653]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:39 compute-0 sudo[258678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:09:39 compute-0 sudo[258678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:09:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:39.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:40 compute-0 sudo[258678]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:09:40 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:09:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:09:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:09:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:09:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:09:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:09:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:09:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:09:40 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:09:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:09:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:09:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:09:40 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:09:40 compute-0 sudo[258736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:09:40 compute-0 sudo[258736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:09:40 compute-0 sudo[258736]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:40 compute-0 sudo[258762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:09:40 compute-0 sudo[258762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:09:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:40.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:40 compute-0 ceph-mon[74418]: pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:09:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:09:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:09:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:09:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:09:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:09:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:09:40 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:09:40 compute-0 podman[258829]: 2025-12-05 10:09:40.854422923 +0000 UTC m=+0.060338080 container create 5178f1b30f1a65ebacceb6a081ae9ea24708dc55bc2e7aa83e42481705ab3d2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:09:40 compute-0 systemd[1]: Started libpod-conmon-5178f1b30f1a65ebacceb6a081ae9ea24708dc55bc2e7aa83e42481705ab3d2b.scope.
Dec 05 10:09:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:09:40 compute-0 podman[258829]: 2025-12-05 10:09:40.825859102 +0000 UTC m=+0.031774319 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:09:40 compute-0 podman[258829]: 2025-12-05 10:09:40.937141044 +0000 UTC m=+0.143056231 container init 5178f1b30f1a65ebacceb6a081ae9ea24708dc55bc2e7aa83e42481705ab3d2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sammet, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:09:40 compute-0 podman[258829]: 2025-12-05 10:09:40.946997253 +0000 UTC m=+0.152912420 container start 5178f1b30f1a65ebacceb6a081ae9ea24708dc55bc2e7aa83e42481705ab3d2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 05 10:09:40 compute-0 podman[258829]: 2025-12-05 10:09:40.951009233 +0000 UTC m=+0.156924450 container attach 5178f1b30f1a65ebacceb6a081ae9ea24708dc55bc2e7aa83e42481705ab3d2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sammet, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:09:40 compute-0 determined_sammet[258845]: 167 167
Dec 05 10:09:40 compute-0 systemd[1]: libpod-5178f1b30f1a65ebacceb6a081ae9ea24708dc55bc2e7aa83e42481705ab3d2b.scope: Deactivated successfully.
Dec 05 10:09:40 compute-0 podman[258829]: 2025-12-05 10:09:40.954041646 +0000 UTC m=+0.159956813 container died 5178f1b30f1a65ebacceb6a081ae9ea24708dc55bc2e7aa83e42481705ab3d2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 05 10:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-29b8e0a63963c55b03b3307d7f8e5a786ab4dbd071f6f2f10e0e6270695c7674-merged.mount: Deactivated successfully.
Dec 05 10:09:40 compute-0 podman[258829]: 2025-12-05 10:09:40.998951214 +0000 UTC m=+0.204866341 container remove 5178f1b30f1a65ebacceb6a081ae9ea24708dc55bc2e7aa83e42481705ab3d2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sammet, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 05 10:09:41 compute-0 systemd[1]: libpod-conmon-5178f1b30f1a65ebacceb6a081ae9ea24708dc55bc2e7aa83e42481705ab3d2b.scope: Deactivated successfully.
Dec 05 10:09:41 compute-0 podman[258869]: 2025-12-05 10:09:41.226040661 +0000 UTC m=+0.071423794 container create 29bdaa3391f7bb958d5631f49d91153290ee2feb96e7208dac93b3fafb10b84f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wilson, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:09:41 compute-0 systemd[1]: Started libpod-conmon-29bdaa3391f7bb958d5631f49d91153290ee2feb96e7208dac93b3fafb10b84f.scope.
Dec 05 10:09:41 compute-0 podman[258869]: 2025-12-05 10:09:41.198785295 +0000 UTC m=+0.044168469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:09:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e231455d958ce68c7c7171ae6ffea19ab0b31deda4bf25afc4a4540e9d8ccb60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e231455d958ce68c7c7171ae6ffea19ab0b31deda4bf25afc4a4540e9d8ccb60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e231455d958ce68c7c7171ae6ffea19ab0b31deda4bf25afc4a4540e9d8ccb60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e231455d958ce68c7c7171ae6ffea19ab0b31deda4bf25afc4a4540e9d8ccb60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e231455d958ce68c7c7171ae6ffea19ab0b31deda4bf25afc4a4540e9d8ccb60/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:41 compute-0 podman[258869]: 2025-12-05 10:09:41.34382392 +0000 UTC m=+0.189207073 container init 29bdaa3391f7bb958d5631f49d91153290ee2feb96e7208dac93b3fafb10b84f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wilson, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 05 10:09:41 compute-0 podman[258869]: 2025-12-05 10:09:41.355433377 +0000 UTC m=+0.200816510 container start 29bdaa3391f7bb958d5631f49d91153290ee2feb96e7208dac93b3fafb10b84f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 10:09:41 compute-0 podman[258869]: 2025-12-05 10:09:41.359438707 +0000 UTC m=+0.204821840 container attach 29bdaa3391f7bb958d5631f49d91153290ee2feb96e7208dac93b3fafb10b84f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 10:09:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:41 compute-0 cool_wilson[258885]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:09:41 compute-0 cool_wilson[258885]: --> All data devices are unavailable
Dec 05 10:09:41 compute-0 systemd[1]: libpod-29bdaa3391f7bb958d5631f49d91153290ee2feb96e7208dac93b3fafb10b84f.scope: Deactivated successfully.
Dec 05 10:09:41 compute-0 podman[258869]: 2025-12-05 10:09:41.760450728 +0000 UTC m=+0.605833871 container died 29bdaa3391f7bb958d5631f49d91153290ee2feb96e7208dac93b3fafb10b84f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wilson, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 10:09:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e231455d958ce68c7c7171ae6ffea19ab0b31deda4bf25afc4a4540e9d8ccb60-merged.mount: Deactivated successfully.
Dec 05 10:09:41 compute-0 podman[258869]: 2025-12-05 10:09:41.81758913 +0000 UTC m=+0.662972253 container remove 29bdaa3391f7bb958d5631f49d91153290ee2feb96e7208dac93b3fafb10b84f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:09:41 compute-0 systemd[1]: libpod-conmon-29bdaa3391f7bb958d5631f49d91153290ee2feb96e7208dac93b3fafb10b84f.scope: Deactivated successfully.
Dec 05 10:09:41 compute-0 sudo[258762]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:41.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0001550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:41 compute-0 sudo[258912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:09:41 compute-0 sudo[258912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:09:41 compute-0 sudo[258912]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:41 compute-0 sudo[258937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:09:42 compute-0 sudo[258937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:09:42 compute-0 podman[259004]: 2025-12-05 10:09:42.42417873 +0000 UTC m=+0.053492233 container create fc3067b17f7542a8102908ae00355504d719b27fcd7c02b5bddcd12bfb91cde6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 10:09:42 compute-0 systemd[1]: Started libpod-conmon-fc3067b17f7542a8102908ae00355504d719b27fcd7c02b5bddcd12bfb91cde6.scope.
Dec 05 10:09:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:09:42 compute-0 podman[259004]: 2025-12-05 10:09:42.39893676 +0000 UTC m=+0.028250303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:09:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:09:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:42 compute-0 podman[259004]: 2025-12-05 10:09:42.52151137 +0000 UTC m=+0.150824953 container init fc3067b17f7542a8102908ae00355504d719b27fcd7c02b5bddcd12bfb91cde6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_tharp, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:09:42 compute-0 podman[259004]: 2025-12-05 10:09:42.532228353 +0000 UTC m=+0.161541896 container start fc3067b17f7542a8102908ae00355504d719b27fcd7c02b5bddcd12bfb91cde6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 05 10:09:42 compute-0 podman[259004]: 2025-12-05 10:09:42.537579619 +0000 UTC m=+0.166893152 container attach fc3067b17f7542a8102908ae00355504d719b27fcd7c02b5bddcd12bfb91cde6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_tharp, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 10:09:42 compute-0 elastic_tharp[259021]: 167 167
Dec 05 10:09:42 compute-0 systemd[1]: libpod-fc3067b17f7542a8102908ae00355504d719b27fcd7c02b5bddcd12bfb91cde6.scope: Deactivated successfully.
Dec 05 10:09:42 compute-0 podman[259004]: 2025-12-05 10:09:42.542498954 +0000 UTC m=+0.171812527 container died fc3067b17f7542a8102908ae00355504d719b27fcd7c02b5bddcd12bfb91cde6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_tharp, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 10:09:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:42.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:09:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:09:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e095fd262083f16c394fd5bf91f77602cbd57424c34a1ccfdc71b4c3ff606c83-merged.mount: Deactivated successfully.
Dec 05 10:09:42 compute-0 podman[259004]: 2025-12-05 10:09:42.593869688 +0000 UTC m=+0.223183211 container remove fc3067b17f7542a8102908ae00355504d719b27fcd7c02b5bddcd12bfb91cde6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_tharp, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:09:42 compute-0 systemd[1]: libpod-conmon-fc3067b17f7542a8102908ae00355504d719b27fcd7c02b5bddcd12bfb91cde6.scope: Deactivated successfully.
Dec 05 10:09:42 compute-0 podman[259046]: 2025-12-05 10:09:42.789479115 +0000 UTC m=+0.054710226 container create 5524671270bf77f114591e35e789e8207fbcea5cd7bab72cd1fa5ef5b3cae10b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:09:42 compute-0 systemd[1]: Started libpod-conmon-5524671270bf77f114591e35e789e8207fbcea5cd7bab72cd1fa5ef5b3cae10b.scope.
Dec 05 10:09:42 compute-0 ceph-mon[74418]: pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:09:42 compute-0 podman[259046]: 2025-12-05 10:09:42.766904418 +0000 UTC m=+0.032135589 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:09:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d6758d2071aa7639c296a76d4187edfe7f1a7dc8a3a23263f95635f39e9393/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d6758d2071aa7639c296a76d4187edfe7f1a7dc8a3a23263f95635f39e9393/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d6758d2071aa7639c296a76d4187edfe7f1a7dc8a3a23263f95635f39e9393/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d6758d2071aa7639c296a76d4187edfe7f1a7dc8a3a23263f95635f39e9393/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:42 compute-0 podman[259046]: 2025-12-05 10:09:42.886518647 +0000 UTC m=+0.151749738 container init 5524671270bf77f114591e35e789e8207fbcea5cd7bab72cd1fa5ef5b3cae10b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 05 10:09:42 compute-0 podman[259046]: 2025-12-05 10:09:42.898536956 +0000 UTC m=+0.163768037 container start 5524671270bf77f114591e35e789e8207fbcea5cd7bab72cd1fa5ef5b3cae10b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_rhodes, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:09:42 compute-0 podman[259046]: 2025-12-05 10:09:42.902765561 +0000 UTC m=+0.167996652 container attach 5524671270bf77f114591e35e789e8207fbcea5cd7bab72cd1fa5ef5b3cae10b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_rhodes, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]: {
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:     "1": [
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:         {
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             "devices": [
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "/dev/loop3"
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             ],
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             "lv_name": "ceph_lv0",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             "lv_size": "21470642176",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             "name": "ceph_lv0",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             "tags": {
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.cluster_name": "ceph",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.crush_device_class": "",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.encrypted": "0",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.osd_id": "1",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.type": "block",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.vdo": "0",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:                 "ceph.with_tpm": "0"
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             },
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             "type": "block",
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:             "vg_name": "ceph_vg0"
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:         }
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]:     ]
Dec 05 10:09:43 compute-0 wizardly_rhodes[259062]: }
Dec 05 10:09:43 compute-0 systemd[1]: libpod-5524671270bf77f114591e35e789e8207fbcea5cd7bab72cd1fa5ef5b3cae10b.scope: Deactivated successfully.
Dec 05 10:09:43 compute-0 podman[259046]: 2025-12-05 10:09:43.245583432 +0000 UTC m=+0.510814533 container died 5524671270bf77f114591e35e789e8207fbcea5cd7bab72cd1fa5ef5b3cae10b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-70d6758d2071aa7639c296a76d4187edfe7f1a7dc8a3a23263f95635f39e9393-merged.mount: Deactivated successfully.
Dec 05 10:09:43 compute-0 podman[259046]: 2025-12-05 10:09:43.294302953 +0000 UTC m=+0.559534034 container remove 5524671270bf77f114591e35e789e8207fbcea5cd7bab72cd1fa5ef5b3cae10b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:09:43 compute-0 systemd[1]: libpod-conmon-5524671270bf77f114591e35e789e8207fbcea5cd7bab72cd1fa5ef5b3cae10b.scope: Deactivated successfully.
Dec 05 10:09:43 compute-0 sudo[258937]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:43 compute-0 sudo[259083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:09:43 compute-0 sudo[259083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:09:43 compute-0 sudo[259083]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:43.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:09:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:43.601Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:09:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:43.601Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:09:43 compute-0 sudo[259108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:09:43 compute-0 sudo[259108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:09:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:43.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:44 compute-0 podman[259174]: 2025-12-05 10:09:44.011387253 +0000 UTC m=+0.045028242 container create 5c2559eb3437bde9ccbaff935906edbbdd90b46e5413f560607e2a678e745b65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 10:09:44 compute-0 podman[259174]: 2025-12-05 10:09:43.988623181 +0000 UTC m=+0.022264150 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:09:44 compute-0 systemd[1]: Started libpod-conmon-5c2559eb3437bde9ccbaff935906edbbdd90b46e5413f560607e2a678e745b65.scope.
Dec 05 10:09:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:09:44 compute-0 podman[259174]: 2025-12-05 10:09:44.155374089 +0000 UTC m=+0.189015098 container init 5c2559eb3437bde9ccbaff935906edbbdd90b46e5413f560607e2a678e745b65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 10:09:44 compute-0 podman[259174]: 2025-12-05 10:09:44.164025876 +0000 UTC m=+0.197666845 container start 5c2559eb3437bde9ccbaff935906edbbdd90b46e5413f560607e2a678e745b65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:09:44 compute-0 podman[259174]: 2025-12-05 10:09:44.168738714 +0000 UTC m=+0.202379723 container attach 5c2559eb3437bde9ccbaff935906edbbdd90b46e5413f560607e2a678e745b65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:09:44 compute-0 elastic_sammet[259191]: 167 167
Dec 05 10:09:44 compute-0 systemd[1]: libpod-5c2559eb3437bde9ccbaff935906edbbdd90b46e5413f560607e2a678e745b65.scope: Deactivated successfully.
Dec 05 10:09:44 compute-0 podman[259174]: 2025-12-05 10:09:44.172991141 +0000 UTC m=+0.206632120 container died 5c2559eb3437bde9ccbaff935906edbbdd90b46e5413f560607e2a678e745b65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:09:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea42a6e878e4a08109b381f0c341014b2bf97a36f893ca5d848e90844d067961-merged.mount: Deactivated successfully.
Dec 05 10:09:44 compute-0 podman[259174]: 2025-12-05 10:09:44.227190932 +0000 UTC m=+0.260831921 container remove 5c2559eb3437bde9ccbaff935906edbbdd90b46e5413f560607e2a678e745b65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 05 10:09:44 compute-0 systemd[1]: libpod-conmon-5c2559eb3437bde9ccbaff935906edbbdd90b46e5413f560607e2a678e745b65.scope: Deactivated successfully.
Dec 05 10:09:44 compute-0 podman[259216]: 2025-12-05 10:09:44.478996955 +0000 UTC m=+0.086210178 container create 7a261c58b588b11b02bf49e4dbe8fc6e643cd92ee2623449b4b18e637cdef17b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hodgkin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 10:09:44 compute-0 podman[259216]: 2025-12-05 10:09:44.424238138 +0000 UTC m=+0.031451421 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:09:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc80026c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:09:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:44.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:09:44 compute-0 systemd[1]: Started libpod-conmon-7a261c58b588b11b02bf49e4dbe8fc6e643cd92ee2623449b4b18e637cdef17b.scope.
Dec 05 10:09:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f924c327648743753d198aff0cdf20db2ad3ac6910bf884798a04232e8909eac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f924c327648743753d198aff0cdf20db2ad3ac6910bf884798a04232e8909eac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f924c327648743753d198aff0cdf20db2ad3ac6910bf884798a04232e8909eac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f924c327648743753d198aff0cdf20db2ad3ac6910bf884798a04232e8909eac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:44 compute-0 podman[259216]: 2025-12-05 10:09:44.662879931 +0000 UTC m=+0.270093174 container init 7a261c58b588b11b02bf49e4dbe8fc6e643cd92ee2623449b4b18e637cdef17b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:09:44 compute-0 podman[259216]: 2025-12-05 10:09:44.669241655 +0000 UTC m=+0.276454868 container start 7a261c58b588b11b02bf49e4dbe8fc6e643cd92ee2623449b4b18e637cdef17b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hodgkin, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 10:09:44 compute-0 podman[259216]: 2025-12-05 10:09:44.672914095 +0000 UTC m=+0.280127338 container attach 7a261c58b588b11b02bf49e4dbe8fc6e643cd92ee2623449b4b18e637cdef17b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hodgkin, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:09:44 compute-0 ceph-mon[74418]: pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:45 compute-0 lvm[259307]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:09:45 compute-0 lvm[259307]: VG ceph_vg0 finished
Dec 05 10:09:45 compute-0 pensive_hodgkin[259233]: {}
Dec 05 10:09:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:09:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:45 compute-0 systemd[1]: libpod-7a261c58b588b11b02bf49e4dbe8fc6e643cd92ee2623449b4b18e637cdef17b.scope: Deactivated successfully.
Dec 05 10:09:45 compute-0 systemd[1]: libpod-7a261c58b588b11b02bf49e4dbe8fc6e643cd92ee2623449b4b18e637cdef17b.scope: Consumed 1.289s CPU time.
Dec 05 10:09:45 compute-0 podman[259216]: 2025-12-05 10:09:45.448296459 +0000 UTC m=+1.055509682 container died 7a261c58b588b11b02bf49e4dbe8fc6e643cd92ee2623449b4b18e637cdef17b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:09:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f924c327648743753d198aff0cdf20db2ad3ac6910bf884798a04232e8909eac-merged.mount: Deactivated successfully.
Dec 05 10:09:45 compute-0 podman[259216]: 2025-12-05 10:09:45.502859651 +0000 UTC m=+1.110072884 container remove 7a261c58b588b11b02bf49e4dbe8fc6e643cd92ee2623449b4b18e637cdef17b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 10:09:45 compute-0 systemd[1]: libpod-conmon-7a261c58b588b11b02bf49e4dbe8fc6e643cd92ee2623449b4b18e637cdef17b.scope: Deactivated successfully.
Dec 05 10:09:45 compute-0 sudo[259108]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:09:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:09:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:09:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:09:45] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:09:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:09:45] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:09:45 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:09:45 compute-0 sudo[259321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:09:45 compute-0 sudo[259321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:09:45 compute-0 sudo[259321]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:45.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:46 compute-0 ceph-mon[74418]: pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:09:46 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:09:46 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:09:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:46.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:47.304Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:09:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:09:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:47.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:48.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:48 compute-0 ceph-mon[74418]: pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:09:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:49.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc80026c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/100950 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:09:50 compute-0 sudo[259352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:09:50 compute-0 sudo[259352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:09:50 compute-0 sudo[259352]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:50.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:50 compute-0 ceph-mon[74418]: pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:09:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:51.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:09:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:52.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:52 compute-0 ceph-mon[74418]: pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:53.602Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:09:53 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3646026835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:09:53 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2585992169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:09:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:53.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:54 compute-0 nova_compute[257087]: 2025-12-05 10:09:54.268 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:54 compute-0 nova_compute[257087]: 2025-12-05 10:09:54.269 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:54 compute-0 nova_compute[257087]: 2025-12-05 10:09:54.290 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:54 compute-0 nova_compute[257087]: 2025-12-05 10:09:54.290 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:54 compute-0 nova_compute[257087]: 2025-12-05 10:09:54.291 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:54 compute-0 nova_compute[257087]: 2025-12-05 10:09:54.291 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:54 compute-0 nova_compute[257087]: 2025-12-05 10:09:54.317 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:09:54 compute-0 nova_compute[257087]: 2025-12-05 10:09:54.318 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:09:54 compute-0 nova_compute[257087]: 2025-12-05 10:09:54.318 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:09:54 compute-0 nova_compute[257087]: 2025-12-05 10:09:54.318 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:09:54 compute-0 nova_compute[257087]: 2025-12-05 10:09:54.319 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:09:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:54.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:54 compute-0 ceph-mon[74418]: pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:09:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2163188957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:09:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/282728949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:09:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:09:54 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/741154037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:09:54 compute-0 nova_compute[257087]: 2025-12-05 10:09:54.799 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:09:55 compute-0 nova_compute[257087]: 2025-12-05 10:09:55.013 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:09:55 compute-0 nova_compute[257087]: 2025-12-05 10:09:55.015 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4876MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:09:55 compute-0 nova_compute[257087]: 2025-12-05 10:09:55.015 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:09:55 compute-0 nova_compute[257087]: 2025-12-05 10:09:55.015 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:09:55 compute-0 nova_compute[257087]: 2025-12-05 10:09:55.280 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:09:55 compute-0 nova_compute[257087]: 2025-12-05 10:09:55.280 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:09:55 compute-0 nova_compute[257087]: 2025-12-05 10:09:55.374 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:09:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:09:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:09:55] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:09:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:09:55] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:09:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/741154037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:09:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:09:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1864548976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:09:55 compute-0 nova_compute[257087]: 2025-12-05 10:09:55.862 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:09:55 compute-0 nova_compute[257087]: 2025-12-05 10:09:55.871 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:09:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:09:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:55.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:09:55 compute-0 nova_compute[257087]: 2025-12-05 10:09:55.894 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:09:55 compute-0 nova_compute[257087]: 2025-12-05 10:09:55.897 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:09:55 compute-0 nova_compute[257087]: 2025-12-05 10:09:55.898 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:09:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:56 compute-0 nova_compute[257087]: 2025-12-05 10:09:56.136 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:56 compute-0 nova_compute[257087]: 2025-12-05 10:09:56.137 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:09:56 compute-0 nova_compute[257087]: 2025-12-05 10:09:56.138 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:09:56 compute-0 nova_compute[257087]: 2025-12-05 10:09:56.158 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:09:56 compute-0 nova_compute[257087]: 2025-12-05 10:09:56.159 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:56 compute-0 nova_compute[257087]: 2025-12-05 10:09:56.160 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:56 compute-0 nova_compute[257087]: 2025-12-05 10:09:56.161 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:56 compute-0 nova_compute[257087]: 2025-12-05 10:09:56.161 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:09:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 10:09:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:56.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 10:09:56 compute-0 ceph-mon[74418]: pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:09:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1864548976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:09:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:09:57.305Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:09:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:09:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:09:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:09:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:09:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:09:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:09:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:09:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:09:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:09:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:09:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1183604219' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:09:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1183604219' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:09:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:09:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:57.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:09:58.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:58 compute-0 ceph-mon[74418]: pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:09:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:09:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:09:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:09:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:09:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:09:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:09:59.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:09:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:09:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 10:10:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 10:10:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :      osd.0 observed slow operation indications in BlueStore
Dec 05 10:10:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Dec 05 10:10:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:00.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:00 compute-0 ceph-mon[74418]: pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:10:00 compute-0 ceph-mon[74418]: Health detail: HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 10:10:00 compute-0 ceph-mon[74418]: [WRN] BLUESTORE_SLOW_OP_ALERT: 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 10:10:00 compute-0 ceph-mon[74418]:      osd.0 observed slow operation indications in BlueStore
Dec 05 10:10:00 compute-0 ceph-mon[74418]:      osd.1 observed slow operation indications in BlueStore
Dec 05 10:10:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:10:01 compute-0 podman[259432]: 2025-12-05 10:10:01.446927505 +0000 UTC m=+0.091524973 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 10:10:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:01 compute-0 podman[259433]: 2025-12-05 10:10:01.463443787 +0000 UTC m=+0.108036775 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:10:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:01.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:10:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:10:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:10:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:02.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:02 compute-0 ceph-mon[74418]: pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:10:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:10:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:03.603Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:10:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:03.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:04.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:04 compute-0 ceph-mon[74418]: pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:10:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:10:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:10:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:05 compute-0 podman[259474]: 2025-12-05 10:10:05.495494357 +0000 UTC m=+0.135853954 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:10:05] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:10:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:10:05] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:10:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:05.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:06.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:06 compute-0 ceph-mon[74418]: pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:10:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:07.306Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:10:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:10:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:07 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:10:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:07.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:07 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:08.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:09 compute-0 ceph-mon[74418]: pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:10:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:10:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:09.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:10.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:10 compute-0 sudo[259508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:10:10 compute-0 sudo[259508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:10:10 compute-0 sudo[259508]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:11 compute-0 ceph-mon[74418]: pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:10:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:10:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:11.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/101012 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:10:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:10:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:10:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:10:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:12.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:13 compute-0 ceph-mon[74418]: pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:10:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:10:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:10:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:13 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:13.604Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:10:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:13.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:13 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:14.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:15 compute-0 ceph-mon[74418]: pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:10:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:10:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:10:15] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Dec 05 10:10:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:10:15] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Dec 05 10:10:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:15.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:16.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:17.308Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:10:17 compute-0 ceph-mon[74418]: pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:10:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:10:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:10:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:17.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001de0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:18.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:18 compute-0 ceph-mon[74418]: pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:10:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:10:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:19.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:10:20.567 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:10:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:10:20.569 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:10:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:10:20.569 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:10:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:20.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:21 compute-0 ceph-mon[74418]: pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:10:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:21 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cc8001e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ad9c5d0 =====
Dec 05 10:10:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb0003240 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:22.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ad9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:22 compute-0 radosgw[95374]: beast: 0x7f134ad9c5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:22.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:10:22 compute-0 ceph-mon[74418]: pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:23.606Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:10:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:23.606Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:10:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:23.606Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:10:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0001550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:23 compute-0 ceph-mon[74418]: pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ad9c5d0 =====
Dec 05 10:10:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ad9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:24 compute-0 radosgw[95374]: beast: 0x7f134ad9c5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:24.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:24.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:25 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:10:25] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Dec 05 10:10:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:10:25] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Dec 05 10:10:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:25 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00016f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:26 compute-0 ceph-mon[74418]: pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ad9c5d0 =====
Dec 05 10:10:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ad9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:26 compute-0 radosgw[95374]: beast: 0x7f134ad9c5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:26.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:26.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:27.309Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:10:27
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.nfs', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'images', '.rgw.root']
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:10:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:10:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:10:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:10:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:10:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:10:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:28.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ad9c5d0 =====
Dec 05 10:10:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ad9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:28 compute-0 radosgw[95374]: beast: 0x7f134ad9c5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:28.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:29 compute-0 ceph-mon[74418]: pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:29 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:10:29.018 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:29 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:10:29.020 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:10:29 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:10:29.021 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:10:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:10:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:29 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Check health
Dec 05 10:10:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:30 compute-0 ceph-mon[74418]: pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:10:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:30 compute-0 sudo[259557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:10:30 compute-0 sudo[259557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:10:30 compute-0 sudo[259557]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:30.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ad9c5d0 =====
Dec 05 10:10:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ad9c5d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 05 10:10:30 compute-0 radosgw[95374]: beast: 0x7f134ad9c5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:30.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 05 10:10:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00016f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:32 compute-0 podman[259584]: 2025-12-05 10:10:32.404477135 +0000 UTC m=+0.064024134 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 05 10:10:32 compute-0 podman[259583]: 2025-12-05 10:10:32.414693453 +0000 UTC m=+0.076822823 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 10:10:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:10:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:32.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:32.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:33 compute-0 ceph-mon[74418]: pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:33.607Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:10:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:33.608Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:10:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:34 compute-0 ceph-mon[74418]: pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:34.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:34.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:10:35] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:10:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:10:35] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:10:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:36 compute-0 podman[259625]: 2025-12-05 10:10:36.460415111 +0000 UTC m=+0.123981917 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 05 10:10:36 compute-0 ceph-mon[74418]: pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:36.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ad9c5d0 =====
Dec 05 10:10:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ad9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:36 compute-0 radosgw[95374]: beast: 0x7f134ad9c5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:36.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:37.310Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:10:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:10:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:38 compute-0 ceph-mon[74418]: pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ad9c5d0 =====
Dec 05 10:10:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ad9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:38 compute-0 radosgw[95374]: beast: 0x7f134ad9c5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:38.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:38.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:10:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:40 compute-0 ceph-mon[74418]: pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:10:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ad9c5d0 =====
Dec 05 10:10:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:40.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ad9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:40 compute-0 radosgw[95374]: beast: 0x7f134ad9c5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:40.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:10:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:10:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:42 compute-0 ceph-mon[74418]: pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:10:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:10:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ad9c5d0 =====
Dec 05 10:10:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ad9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:42 compute-0 radosgw[95374]: beast: 0x7f134ad9c5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:42.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:42.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:43.609Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:10:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:44 compute-0 ceph-mon[74418]: pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:44.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ad9c5d0 =====
Dec 05 10:10:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ad9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:44 compute-0 radosgw[95374]: beast: 0x7f134ad9c5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:44.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:10:45] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:10:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:10:45] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:10:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:46 compute-0 sudo[259660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:10:46 compute-0 sudo[259660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:10:46 compute-0 sudo[259660]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:46 compute-0 sudo[259685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:10:46 compute-0 sudo[259685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:10:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:46 compute-0 sudo[259685]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:46 compute-0 ceph-mon[74418]: pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:10:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:10:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:10:46 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:10:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:10:46 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:10:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:10:46 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:10:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:10:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:10:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:10:46 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:10:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:10:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:10:46 compute-0 sudo[259744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:10:46 compute-0 sudo[259744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:10:46 compute-0 sudo[259744]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:46.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:46.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:47 compute-0 sudo[259769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:10:47 compute-0 sudo[259769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:10:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:47.312Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:10:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:47.312Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
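The two alertmanager dispatcher entries above show the ceph-dashboard webhook receivers on compute-1 and compute-2 failing with "context deadline exceeded" and "dial tcp ... i/o timeout" against port 8443. A quick reachability check for those exact endpoints (hosts and port taken verbatim from the log; this only tests the TCP connect that is timing out, not the HTTP POST itself):

    # TCP reachability check for the webhook receivers the dispatcher is
    # timing out against in the two log entries above.
    import socket

    RECEIVERS = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]

    for host, port in RECEIVERS:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as exc:  # covers timeouts and refused connections
            print(f"{host}:{port} unreachable: {exc}")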
Dec 05 10:10:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
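For scale, the pgmap line above reports 153 MiB used of 60 GiB raw capacity; a one-liner confirms that is roughly 0.25 % utilization:

    # Utilization implied by the pgmap line: 153 MiB used of 60 GiB raw.
    used_bytes = 153 * 1024**2
    total_bytes = 60 * 1024**3
    print(f"{used_bytes / total_bytes:.2%}")  # -> 0.25%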
Dec 05 10:10:47 compute-0 podman[259834]: 2025-12-05 10:10:47.488282984 +0000 UTC m=+0.051853452 container create 7f7af7bb17b8fbcc258b3da81fc6e9125cc1bf1650d96ffa4203717bba9212c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:10:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:47 compute-0 systemd[1]: Started libpod-conmon-7f7af7bb17b8fbcc258b3da81fc6e9125cc1bf1650d96ffa4203717bba9212c8.scope.
Dec 05 10:10:47 compute-0 podman[259834]: 2025-12-05 10:10:47.466666576 +0000 UTC m=+0.030237054 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:10:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:10:47 compute-0 podman[259834]: 2025-12-05 10:10:47.597065815 +0000 UTC m=+0.160636363 container init 7f7af7bb17b8fbcc258b3da81fc6e9125cc1bf1650d96ffa4203717bba9212c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 05 10:10:47 compute-0 podman[259834]: 2025-12-05 10:10:47.60569764 +0000 UTC m=+0.169268138 container start 7f7af7bb17b8fbcc258b3da81fc6e9125cc1bf1650d96ffa4203717bba9212c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_curie, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:10:47 compute-0 podman[259834]: 2025-12-05 10:10:47.610115271 +0000 UTC m=+0.173685779 container attach 7f7af7bb17b8fbcc258b3da81fc6e9125cc1bf1650d96ffa4203717bba9212c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 10:10:47 compute-0 zealous_curie[259850]: 167 167
Dec 05 10:10:47 compute-0 systemd[1]: libpod-7f7af7bb17b8fbcc258b3da81fc6e9125cc1bf1650d96ffa4203717bba9212c8.scope: Deactivated successfully.
Dec 05 10:10:47 compute-0 conmon[259850]: conmon 7f7af7bb17b8fbcc258b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f7af7bb17b8fbcc258b3da81fc6e9125cc1bf1650d96ffa4203717bba9212c8.scope/container/memory.events
Dec 05 10:10:47 compute-0 podman[259834]: 2025-12-05 10:10:47.613791531 +0000 UTC m=+0.177361989 container died 7f7af7bb17b8fbcc258b3da81fc6e9125cc1bf1650d96ffa4203717bba9212c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0277f93244a25ce44a6a530ca0fe325ffbdc552cd7fb4c01bf96cab79f468258-merged.mount: Deactivated successfully.
Dec 05 10:10:47 compute-0 podman[259834]: 2025-12-05 10:10:47.668543261 +0000 UTC m=+0.232113719 container remove 7f7af7bb17b8fbcc258b3da81fc6e9125cc1bf1650d96ffa4203717bba9212c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_curie, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 10:10:47 compute-0 systemd[1]: libpod-conmon-7f7af7bb17b8fbcc258b3da81fc6e9125cc1bf1650d96ffa4203717bba9212c8.scope: Deactivated successfully.
Dec 05 10:10:47 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:10:47 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:10:47 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:10:47 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:10:47 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:10:47 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:10:47 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:10:47 compute-0 podman[259875]: 2025-12-05 10:10:47.931657444 +0000 UTC m=+0.071650112 container create f35d30cd022d2ec88417c7ab31d05f1c0296bcbee5583e3b33a9fe8f707ac1a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:10:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:10:47 compute-0 podman[259875]: 2025-12-05 10:10:47.904770321 +0000 UTC m=+0.044763069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:10:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:48 compute-0 systemd[1]: Started libpod-conmon-f35d30cd022d2ec88417c7ab31d05f1c0296bcbee5583e3b33a9fe8f707ac1a6.scope.
Dec 05 10:10:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db7d1021ff6333d3a0d57699a1ab503a2e45b980c68eb9ac99ecd19089d6688/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db7d1021ff6333d3a0d57699a1ab503a2e45b980c68eb9ac99ecd19089d6688/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db7d1021ff6333d3a0d57699a1ab503a2e45b980c68eb9ac99ecd19089d6688/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db7d1021ff6333d3a0d57699a1ab503a2e45b980c68eb9ac99ecd19089d6688/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db7d1021ff6333d3a0d57699a1ab503a2e45b980c68eb9ac99ecd19089d6688/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
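The five kernel lines above are the standard XFS warning for filesystems without the bigtime feature: their inode timestamps are 32-bit and saturate at 0x7fffffff. Converting that constant shows why the message says "until 2038":

    # The kernel warning's 0x7fffffff limit is the classic 32-bit time_t
    # ceiling; converting it to UTC gives the cutoff the message refers to.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00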
Dec 05 10:10:48 compute-0 podman[259875]: 2025-12-05 10:10:48.080809513 +0000 UTC m=+0.220802241 container init f35d30cd022d2ec88417c7ab31d05f1c0296bcbee5583e3b33a9fe8f707ac1a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_elion, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 10:10:48 compute-0 podman[259875]: 2025-12-05 10:10:48.091987058 +0000 UTC m=+0.231979716 container start f35d30cd022d2ec88417c7ab31d05f1c0296bcbee5583e3b33a9fe8f707ac1a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_elion, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 05 10:10:48 compute-0 podman[259875]: 2025-12-05 10:10:48.096452248 +0000 UTC m=+0.236444946 container attach f35d30cd022d2ec88417c7ab31d05f1c0296bcbee5583e3b33a9fe8f707ac1a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Dec 05 10:10:48 compute-0 jovial_elion[259891]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:10:48 compute-0 jovial_elion[259891]: --> All data devices are unavailable
Dec 05 10:10:48 compute-0 systemd[1]: libpod-f35d30cd022d2ec88417c7ab31d05f1c0296bcbee5583e3b33a9fe8f707ac1a6.scope: Deactivated successfully.
Dec 05 10:10:48 compute-0 podman[259875]: 2025-12-05 10:10:48.488079809 +0000 UTC m=+0.628072487 container died f35d30cd022d2ec88417c7ab31d05f1c0296bcbee5583e3b33a9fe8f707ac1a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 10:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0db7d1021ff6333d3a0d57699a1ab503a2e45b980c68eb9ac99ecd19089d6688-merged.mount: Deactivated successfully.
Dec 05 10:10:48 compute-0 podman[259875]: 2025-12-05 10:10:48.534738229 +0000 UTC m=+0.674730887 container remove f35d30cd022d2ec88417c7ab31d05f1c0296bcbee5583e3b33a9fe8f707ac1a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_elion, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:10:48 compute-0 systemd[1]: libpod-conmon-f35d30cd022d2ec88417c7ab31d05f1c0296bcbee5583e3b33a9fe8f707ac1a6.scope: Deactivated successfully.
Dec 05 10:10:48 compute-0 sudo[259769]: pam_unix(sudo:session): session closed for user root
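That closes the cephadm-wrapped "ceph-volume lvm batch" run started by sudo[259769]: the jovial_elion container reported "passed data devices: 0 physical, 1 LVM" and then "All data devices are unavailable", i.e. the batch call declined /dev/ceph_vg0/ceph_lv0 (the lvm list output further down shows the LV already carries osd.1). A sketch of a dry-run to inspect that decision, assuming ceph-volume's --report flag for the batch subcommand; the wrapper path and fsid are copied from the sudo line above, and the --image/--timeout flags are trimmed for brevity:

    # Sketch: re-run the same batch call as a dry run to see why the LV was
    # rejected. --report asks ceph-volume to print its plan without acting
    # (an assumption about the deployed ceph-volume; verify before relying
    # on it). Flags trimmed relative to the logged invocation.
    import subprocess

    cmd = [
        "/bin/python3",
        "/var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/"
        "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36",
        "ceph-volume", "--fsid", "3c63ce0f-5206-59ae-8381-b67d0b6424b5", "--",
        "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0", "--report",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)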
Dec 05 10:10:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:48 compute-0 sudo[259920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:10:48 compute-0 sudo[259920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:10:48 compute-0 sudo[259920]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:48 compute-0 sudo[259945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:10:48 compute-0 sudo[259945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:10:48 compute-0 ceph-mon[74418]: pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:48.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:48.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:49 compute-0 podman[260011]: 2025-12-05 10:10:49.270726654 +0000 UTC m=+0.062373000 container create 446e0f50d8956bb9220d8563386e307532d222a2c2967d0ed41f206336e4fcc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_lalande, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 10:10:49 compute-0 systemd[1]: Started libpod-conmon-446e0f50d8956bb9220d8563386e307532d222a2c2967d0ed41f206336e4fcc9.scope.
Dec 05 10:10:49 compute-0 podman[260011]: 2025-12-05 10:10:49.2496774 +0000 UTC m=+0.041323826 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:10:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:10:49 compute-0 podman[260011]: 2025-12-05 10:10:49.367835707 +0000 UTC m=+0.159482113 container init 446e0f50d8956bb9220d8563386e307532d222a2c2967d0ed41f206336e4fcc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:10:49 compute-0 podman[260011]: 2025-12-05 10:10:49.380762859 +0000 UTC m=+0.172409235 container start 446e0f50d8956bb9220d8563386e307532d222a2c2967d0ed41f206336e4fcc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_lalande, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 10:10:49 compute-0 podman[260011]: 2025-12-05 10:10:49.385260871 +0000 UTC m=+0.176907257 container attach 446e0f50d8956bb9220d8563386e307532d222a2c2967d0ed41f206336e4fcc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_lalande, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:10:49 compute-0 loving_lalande[260027]: 167 167
Dec 05 10:10:49 compute-0 systemd[1]: libpod-446e0f50d8956bb9220d8563386e307532d222a2c2967d0ed41f206336e4fcc9.scope: Deactivated successfully.
Dec 05 10:10:49 compute-0 podman[260011]: 2025-12-05 10:10:49.387510222 +0000 UTC m=+0.179156588 container died 446e0f50d8956bb9220d8563386e307532d222a2c2967d0ed41f206336e4fcc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Dec 05 10:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-af6fa8716779cd850fb03310544a67054d80c63ffeed59d5bafcdd7ab914f87f-merged.mount: Deactivated successfully.
Dec 05 10:10:49 compute-0 podman[260011]: 2025-12-05 10:10:49.433410562 +0000 UTC m=+0.225056918 container remove 446e0f50d8956bb9220d8563386e307532d222a2c2967d0ed41f206336e4fcc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_lalande, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:10:49 compute-0 systemd[1]: libpod-conmon-446e0f50d8956bb9220d8563386e307532d222a2c2967d0ed41f206336e4fcc9.scope: Deactivated successfully.
Dec 05 10:10:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:10:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:49 compute-0 podman[260051]: 2025-12-05 10:10:49.5935298 +0000 UTC m=+0.040137464 container create 08aad95aef85ff6fc5b8cac090991f8e6c63e837c2bedf8e3e67691ac1233fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 10:10:49 compute-0 systemd[1]: Started libpod-conmon-08aad95aef85ff6fc5b8cac090991f8e6c63e837c2bedf8e3e67691ac1233fd8.scope.
Dec 05 10:10:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:10:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8806b75edc50b7ba56e41bf93d5b2a1d11401fc3791578411da65123101854bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8806b75edc50b7ba56e41bf93d5b2a1d11401fc3791578411da65123101854bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8806b75edc50b7ba56e41bf93d5b2a1d11401fc3791578411da65123101854bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8806b75edc50b7ba56e41bf93d5b2a1d11401fc3791578411da65123101854bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:49 compute-0 podman[260051]: 2025-12-05 10:10:49.574357578 +0000 UTC m=+0.020965272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:10:49 compute-0 podman[260051]: 2025-12-05 10:10:49.672354196 +0000 UTC m=+0.118961900 container init 08aad95aef85ff6fc5b8cac090991f8e6c63e837c2bedf8e3e67691ac1233fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_bartik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 05 10:10:49 compute-0 podman[260051]: 2025-12-05 10:10:49.682671537 +0000 UTC m=+0.129279241 container start 08aad95aef85ff6fc5b8cac090991f8e6c63e837c2bedf8e3e67691ac1233fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_bartik, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:10:49 compute-0 podman[260051]: 2025-12-05 10:10:49.688079425 +0000 UTC m=+0.134687139 container attach 08aad95aef85ff6fc5b8cac090991f8e6c63e837c2bedf8e3e67691ac1233fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:10:49 compute-0 nice_bartik[260068]: {
Dec 05 10:10:49 compute-0 nice_bartik[260068]:     "1": [
Dec 05 10:10:49 compute-0 nice_bartik[260068]:         {
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             "devices": [
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "/dev/loop3"
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             ],
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             "lv_name": "ceph_lv0",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             "lv_size": "21470642176",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             "name": "ceph_lv0",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             "tags": {
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.cluster_name": "ceph",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.crush_device_class": "",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.encrypted": "0",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.osd_id": "1",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.type": "block",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.vdo": "0",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:                 "ceph.with_tpm": "0"
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             },
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             "type": "block",
Dec 05 10:10:49 compute-0 nice_bartik[260068]:             "vg_name": "ceph_vg0"
Dec 05 10:10:49 compute-0 nice_bartik[260068]:         }
Dec 05 10:10:49 compute-0 nice_bartik[260068]:     ]
Dec 05 10:10:49 compute-0 nice_bartik[260068]: }
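The nice_bartik output above is the JSON answer to the "ceph-volume ... lvm list --format json" call issued by sudo[259945]: a map from OSD id to LV records carrying ceph.* tags. A short sketch of how a caller might consume it, using an abridged copy of the logged payload as input:

    # Parse the ceph-volume lvm list JSON printed above. The top level maps
    # OSD id -> list of LV records; the payload here is an abridged copy of
    # the logged output.
    import json

    raw = """{"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "devices": ["/dev/loop3"],
                     "tags": {"ceph.osd_id": "1",
                              "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
                              "ceph.type": "block"}}]}"""

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            tags = lv["tags"]
            print(osd_id, lv["lv_path"], lv["devices"], tags["ceph.osd_fsid"])
    # -> 1 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3'] f2cb7ff3-5059-40ee-ae0a-c37b437655e2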
Dec 05 10:10:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:49 compute-0 systemd[1]: libpod-08aad95aef85ff6fc5b8cac090991f8e6c63e837c2bedf8e3e67691ac1233fd8.scope: Deactivated successfully.
Dec 05 10:10:50 compute-0 podman[260051]: 2025-12-05 10:10:49.999720977 +0000 UTC m=+0.446328651 container died 08aad95aef85ff6fc5b8cac090991f8e6c63e837c2bedf8e3e67691ac1233fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_bartik, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:10:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8806b75edc50b7ba56e41bf93d5b2a1d11401fc3791578411da65123101854bc-merged.mount: Deactivated successfully.
Dec 05 10:10:50 compute-0 podman[260051]: 2025-12-05 10:10:50.053786529 +0000 UTC m=+0.500394193 container remove 08aad95aef85ff6fc5b8cac090991f8e6c63e837c2bedf8e3e67691ac1233fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_bartik, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 05 10:10:50 compute-0 systemd[1]: libpod-conmon-08aad95aef85ff6fc5b8cac090991f8e6c63e837c2bedf8e3e67691ac1233fd8.scope: Deactivated successfully.
Dec 05 10:10:50 compute-0 sudo[259945]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:50 compute-0 sudo[260089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:10:50 compute-0 sudo[260089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:10:50 compute-0 sudo[260089]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:50 compute-0 sudo[260115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:10:50 compute-0 sudo[260115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:10:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:50 compute-0 podman[260183]: 2025-12-05 10:10:50.802819348 +0000 UTC m=+0.071814526 container create 71e6e6a0173d89ee718e6904951bebd4139fed6e9c6f1d4003c54be7c2df3a5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_feynman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 10:10:50 compute-0 systemd[1]: Started libpod-conmon-71e6e6a0173d89ee718e6904951bebd4139fed6e9c6f1d4003c54be7c2df3a5b.scope.
Dec 05 10:10:50 compute-0 podman[260183]: 2025-12-05 10:10:50.771826354 +0000 UTC m=+0.040821632 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:10:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:10:50 compute-0 podman[260183]: 2025-12-05 10:10:50.884936903 +0000 UTC m=+0.153932131 container init 71e6e6a0173d89ee718e6904951bebd4139fed6e9c6f1d4003c54be7c2df3a5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_feynman, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:10:50 compute-0 podman[260183]: 2025-12-05 10:10:50.892090018 +0000 UTC m=+0.161085206 container start 71e6e6a0173d89ee718e6904951bebd4139fed6e9c6f1d4003c54be7c2df3a5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_feynman, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:10:50 compute-0 sudo[260200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:10:50 compute-0 podman[260183]: 2025-12-05 10:10:50.895270335 +0000 UTC m=+0.164265523 container attach 71e6e6a0173d89ee718e6904951bebd4139fed6e9c6f1d4003c54be7c2df3a5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_feynman, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 10:10:50 compute-0 laughing_feynman[260204]: 167 167
Dec 05 10:10:50 compute-0 sudo[260200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:10:50 compute-0 systemd[1]: libpod-71e6e6a0173d89ee718e6904951bebd4139fed6e9c6f1d4003c54be7c2df3a5b.scope: Deactivated successfully.
Dec 05 10:10:50 compute-0 podman[260183]: 2025-12-05 10:10:50.897915646 +0000 UTC m=+0.166910864 container died 71e6e6a0173d89ee718e6904951bebd4139fed6e9c6f1d4003c54be7c2df3a5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_feynman, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:10:50 compute-0 sudo[260200]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-379f67c8a8775fbc63b4077c3020079123923578d621a9359a2f7586debb8e1c-merged.mount: Deactivated successfully.
Dec 05 10:10:50 compute-0 podman[260183]: 2025-12-05 10:10:50.949033738 +0000 UTC m=+0.218028946 container remove 71e6e6a0173d89ee718e6904951bebd4139fed6e9c6f1d4003c54be7c2df3a5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_feynman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:10:50 compute-0 ceph-mon[74418]: pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:10:50 compute-0 systemd[1]: libpod-conmon-71e6e6a0173d89ee718e6904951bebd4139fed6e9c6f1d4003c54be7c2df3a5b.scope: Deactivated successfully.
Dec 05 10:10:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:10:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:50.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:10:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:10:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:50.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:10:51 compute-0 podman[260252]: 2025-12-05 10:10:51.183434828 +0000 UTC m=+0.071303961 container create ae88e28e0fc1271d2a8c7c14d668e167e18dbdff5ef26458e8b1a2a204c1060f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_feistel, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 05 10:10:51 compute-0 systemd[1]: Started libpod-conmon-ae88e28e0fc1271d2a8c7c14d668e167e18dbdff5ef26458e8b1a2a204c1060f.scope.
Dec 05 10:10:51 compute-0 podman[260252]: 2025-12-05 10:10:51.155743814 +0000 UTC m=+0.043612987 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:10:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c442decfc38bda4a985281a3880704f7a576c8b1e9485663560fedac197065/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c442decfc38bda4a985281a3880704f7a576c8b1e9485663560fedac197065/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c442decfc38bda4a985281a3880704f7a576c8b1e9485663560fedac197065/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c442decfc38bda4a985281a3880704f7a576c8b1e9485663560fedac197065/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:51 compute-0 podman[260252]: 2025-12-05 10:10:51.276724468 +0000 UTC m=+0.164593581 container init ae88e28e0fc1271d2a8c7c14d668e167e18dbdff5ef26458e8b1a2a204c1060f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_feistel, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:10:51 compute-0 podman[260252]: 2025-12-05 10:10:51.284796648 +0000 UTC m=+0.172665751 container start ae88e28e0fc1271d2a8c7c14d668e167e18dbdff5ef26458e8b1a2a204c1060f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_feistel, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 10:10:51 compute-0 podman[260252]: 2025-12-05 10:10:51.2892932 +0000 UTC m=+0.177162363 container attach ae88e28e0fc1271d2a8c7c14d668e167e18dbdff5ef26458e8b1a2a204c1060f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 05 10:10:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:51 compute-0 ceph-mon[74418]: pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:52 compute-0 lvm[260343]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:10:52 compute-0 lvm[260343]: VG ceph_vg0 finished
Dec 05 10:10:52 compute-0 hungry_feistel[260268]: {}
Dec 05 10:10:52 compute-0 systemd[1]: libpod-ae88e28e0fc1271d2a8c7c14d668e167e18dbdff5ef26458e8b1a2a204c1060f.scope: Deactivated successfully.
Dec 05 10:10:52 compute-0 podman[260252]: 2025-12-05 10:10:52.092451783 +0000 UTC m=+0.980320886 container died ae88e28e0fc1271d2a8c7c14d668e167e18dbdff5ef26458e8b1a2a204c1060f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:10:52 compute-0 systemd[1]: libpod-ae88e28e0fc1271d2a8c7c14d668e167e18dbdff5ef26458e8b1a2a204c1060f.scope: Consumed 1.168s CPU time.
Dec 05 10:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-26c442decfc38bda4a985281a3880704f7a576c8b1e9485663560fedac197065-merged.mount: Deactivated successfully.
Dec 05 10:10:52 compute-0 podman[260252]: 2025-12-05 10:10:52.137683824 +0000 UTC m=+1.025552917 container remove ae88e28e0fc1271d2a8c7c14d668e167e18dbdff5ef26458e8b1a2a204c1060f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:10:52 compute-0 systemd[1]: libpod-conmon-ae88e28e0fc1271d2a8c7c14d668e167e18dbdff5ef26458e8b1a2a204c1060f.scope: Deactivated successfully.
Dec 05 10:10:52 compute-0 sudo[260115]: pam_unix(sudo:session): session closed for user root
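That closes the "ceph-volume ... raw list --format json" run from sudo[260115]; its container (hungry_feistel) printed "{}" above, i.e. no raw-mode OSDs on this host, consistent with the single LVM-mode OSD listed earlier. A consumer can distinguish the two inventories like so (both payloads are the literal outputs logged above):

    # The raw list printed "{}": no raw-mode OSDs here. Compare against the
    # lvm inventory from the earlier call (abridged to its top-level key).
    import json

    lvm_osds = json.loads('{"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0"}]}')
    raw_osds = json.loads("{}")

    print(f"lvm-managed OSDs: {sorted(lvm_osds)}")  # ['1']
    print(f"raw-mode OSDs:    {sorted(raw_osds)}")  # []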
Dec 05 10:10:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:10:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:10:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:10:52 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:10:52 compute-0 sudo[260360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:10:52 compute-0 sudo[260360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:10:52 compute-0 sudo[260360]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:10:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:10:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:52.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:10:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:52.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:53 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:10:53 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:10:53 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1139465801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:10:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:53 compute-0 nova_compute[257087]: 2025-12-05 10:10:53.531 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:53.609Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:10:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:53.610Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:10:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:53.611Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:10:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3242117985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:10:54 compute-0 ceph-mon[74418]: pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:54 compute-0 nova_compute[257087]: 2025-12-05 10:10:54.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:54 compute-0 nova_compute[257087]: 2025-12-05 10:10:54.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:54 compute-0 nova_compute[257087]: 2025-12-05 10:10:54.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:10:54 compute-0 nova_compute[257087]: 2025-12-05 10:10:54.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:10:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:54 compute-0 nova_compute[257087]: 2025-12-05 10:10:54.901 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:10:54 compute-0 nova_compute[257087]: 2025-12-05 10:10:54.902 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:54 compute-0 nova_compute[257087]: 2025-12-05 10:10:54.903 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:54 compute-0 nova_compute[257087]: 2025-12-05 10:10:54.903 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:54 compute-0 nova_compute[257087]: 2025-12-05 10:10:54.904 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:10:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:54.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:54.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3670917532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:10:55 compute-0 nova_compute[257087]: 2025-12-05 10:10:55.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:55 compute-0 nova_compute[257087]: 2025-12-05 10:10:55.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:55 compute-0 nova_compute[257087]: 2025-12-05 10:10:55.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:55 compute-0 nova_compute[257087]: 2025-12-05 10:10:55.553 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:10:55 compute-0 nova_compute[257087]: 2025-12-05 10:10:55.554 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:10:55 compute-0 nova_compute[257087]: 2025-12-05 10:10:55.554 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:10:55 compute-0 nova_compute[257087]: 2025-12-05 10:10:55.554 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:10:55 compute-0 nova_compute[257087]: 2025-12-05 10:10:55.555 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:10:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:10:55] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:10:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:10:55] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec 05 10:10:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:10:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2060796262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.024 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.207 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.209 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4868MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.210 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.210 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.285 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.285 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.306 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:10:56 compute-0 ceph-mon[74418]: pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2060796262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:10:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3363863356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:10:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:10:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259834308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.805 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.813 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.897 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.899 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:10:56 compute-0 nova_compute[257087]: 2025-12-05 10:10:56.899 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:10:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:56.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:56.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:10:57.318Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:10:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:10:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:10:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4259834308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:10:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2377079511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:10:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2377079511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:10:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:10:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:10:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:10:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:10:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:10:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:10:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:10:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:58 compute-0 ceph-mon[74418]: pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:10:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:10:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:10:58.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:10:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:10:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:10:58.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:10:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=cleanup t=2025-12-05T10:10:59.220207395Z level=info msg="Completed cleanup jobs" duration=35.705512ms
Dec 05 10:10:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=plugins.update.checker t=2025-12-05T10:10:59.32436894Z level=info msg="Update check succeeded" duration=57.454764ms
Dec 05 10:10:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=grafana.update.checker t=2025-12-05T10:10:59.3250469Z level=info msg="Update check succeeded" duration=49.106297ms
Dec 05 10:10:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:10:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:10:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:10:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:00 compute-0 ceph-mon[74418]: pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 05 10:11:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:11:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:00.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:11:00 compute-0 rsyslogd[1004]: imjournal from <np0005546606:radosgw>: begin to drop messages due to rate-limiting
Dec 05 10:11:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:00.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:02 compute-0 ceph-mon[74418]: pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:11:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:11:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:02.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:11:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:02.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:03 compute-0 podman[260442]: 2025-12-05 10:11:03.460138619 +0000 UTC m=+0.111738682 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 10:11:03 compute-0 podman[260443]: 2025-12-05 10:11:03.461036423 +0000 UTC m=+0.113333855 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:11:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:03.612Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:11:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:04 compute-0 ceph-mon[74418]: pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:11:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:04.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:11:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:04.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:11:05] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:11:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:11:05] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec 05 10:11:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:06.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:06.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:06 compute-0 ceph-mon[74418]: pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:07.320Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:11:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:07 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:07 compute-0 podman[260485]: 2025-12-05 10:11:07.511997823 +0000 UTC m=+0.159251266 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 10:11:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:07.954124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929467954294, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2083, "num_deletes": 250, "total_data_size": 4127196, "memory_usage": 4192264, "flush_reason": "Manual Compaction"}
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929467974598, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 2353270, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20132, "largest_seqno": 22214, "table_properties": {"data_size": 2346556, "index_size": 3464, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17628, "raw_average_key_size": 20, "raw_value_size": 2331517, "raw_average_value_size": 2720, "num_data_blocks": 154, "num_entries": 857, "num_filter_entries": 857, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929254, "oldest_key_time": 1764929254, "file_creation_time": 1764929467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 20586 microseconds, and 7733 cpu microseconds.
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:07.974723) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 2353270 bytes OK
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:07.974781) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:07.976984) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:07.976998) EVENT_LOG_v1 {"time_micros": 1764929467976993, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:07.977016) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4118653, prev total WAL file size 4118653, number of live WAL files 2.
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:07.978694) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(2298KB)], [44(13MB)]
Dec 05 10:11:07 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929467978828, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 16019574, "oldest_snapshot_seqno": -1}
Dec 05 10:11:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5557 keys, 13530231 bytes, temperature: kUnknown
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929468258182, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 13530231, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13493145, "index_size": 22064, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 140182, "raw_average_key_size": 25, "raw_value_size": 13392524, "raw_average_value_size": 2410, "num_data_blocks": 906, "num_entries": 5557, "num_filter_entries": 5557, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764929467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:11:08 compute-0 ceph-mon[74418]: pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:08.258762) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 13530231 bytes
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:08.262116) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 57.3 rd, 48.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 13.0 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(12.6) write-amplify(5.7) OK, records in: 5970, records dropped: 413 output_compression: NoCompression
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:08.262162) EVENT_LOG_v1 {"time_micros": 1764929468262143, "job": 22, "event": "compaction_finished", "compaction_time_micros": 279526, "compaction_time_cpu_micros": 57891, "output_level": 6, "num_output_files": 1, "total_output_size": 13530231, "num_input_records": 5970, "num_output_records": 5557, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929468263192, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929468266557, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:07.978508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:08.266648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:08.266654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:08.266655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:08.266656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:08.266658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/101108 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:11:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:08.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:08.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:11:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:10 compute-0 ceph-mon[74418]: pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.543708) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470543747, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 284, "num_deletes": 251, "total_data_size": 83374, "memory_usage": 90120, "flush_reason": "Manual Compaction"}
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470547028, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 82764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22215, "largest_seqno": 22498, "table_properties": {"data_size": 80853, "index_size": 139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4919, "raw_average_key_size": 18, "raw_value_size": 77151, "raw_average_value_size": 286, "num_data_blocks": 6, "num_entries": 269, "num_filter_entries": 269, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929467, "oldest_key_time": 1764929467, "file_creation_time": 1764929470, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 3352 microseconds, and 1122 cpu microseconds.
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.547063) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 82764 bytes OK
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.547077) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.548310) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.548328) EVENT_LOG_v1 {"time_micros": 1764929470548322, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.548340) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 81255, prev total WAL file size 81255, number of live WAL files 2.
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.548709) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(80KB)], [47(12MB)]
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470549013, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 13612995, "oldest_snapshot_seqno": -1}
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5317 keys, 11371131 bytes, temperature: kUnknown
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470649450, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 11371131, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11337467, "index_size": 19282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 135882, "raw_average_key_size": 25, "raw_value_size": 11242775, "raw_average_value_size": 2114, "num_data_blocks": 784, "num_entries": 5317, "num_filter_entries": 5317, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764929470, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:11:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.649772) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 11371131 bytes
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.692804) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.5 rd, 113.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(301.9) write-amplify(137.4) OK, records in: 5826, records dropped: 509 output_compression: NoCompression
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.692866) EVENT_LOG_v1 {"time_micros": 1764929470692840, "job": 24, "event": "compaction_finished", "compaction_time_micros": 100456, "compaction_time_cpu_micros": 47100, "output_level": 6, "num_output_files": 1, "total_output_size": 11371131, "num_input_records": 5826, "num_output_records": 5317, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470693362, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470699505, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.548589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.699594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.699601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.699604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.699606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:10 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:11:10.699609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:10.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:10.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:11 compute-0 sudo[260515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:11:11 compute-0 sudo[260515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:11:11 compute-0 sudo[260515]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:11:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:11:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:11:12 compute-0 ceph-mon[74418]: pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:11:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:11:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:12.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:12.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:11:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:13 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:13.613Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:11:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:13.614Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:11:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:11:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:14.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:15.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:15 compute-0 ceph-mon[74418]: pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:11:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:11:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:11:15] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:11:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:11:15] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:11:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:11:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:16.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:11:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:17.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:11:17 compute-0 ceph-mon[74418]: pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:11:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:17.321Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:11:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:11:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:11:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:18 compute-0 ceph-mon[74418]: pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:11:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:18.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:19.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:11:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:11:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:11:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:11:20.568 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:11:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:11:20.569 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:11:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:11:20.569 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:11:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:21.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:21.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Dec 05 10:11:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:21 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:21 compute-0 ceph-mon[74418]: pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:11:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:22 compute-0 ceph-mon[74418]: pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Dec 05 10:11:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:23.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:23.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:11:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:11:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:11:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:23.615Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:11:23 compute-0 ceph-mon[74418]: pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 05 10:11:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:25.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:25.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:11:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:25 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:11:25] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:11:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:11:25] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec 05 10:11:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:26 compute-0 ceph-mon[74418]: pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:11:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:27.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:27.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:27.322Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:11:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:11:27
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.mgr', 'volumes', '.nfs', 'cephfs.cephfs.data', 'images', '.rgw.root', 'backups', 'default.rgw.log']
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:11:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:11:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:11:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:11:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:11:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:11:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/101128 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:11:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc008b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:28 compute-0 ceph-mon[74418]: pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:11:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:29.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:29.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:11:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:30 compute-0 ceph-mon[74418]: pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 05 10:11:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:31.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:31.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:31 compute-0 sudo[260563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:11:31 compute-0 sudo[260563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:11:31 compute-0 sudo[260563]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:11:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc008b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:33.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:33.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:11:33 compute-0 ceph-mon[74418]: pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:11:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:11:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:33.617Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:11:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc008b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:34 compute-0 ceph-mon[74418]: pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:11:34 compute-0 podman[260591]: 2025-12-05 10:11:34.402848065 +0000 UTC m=+0.069410031 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:11:34 compute-0 podman[260592]: 2025-12-05 10:11:34.422903571 +0000 UTC m=+0.068574617 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 05 10:11:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:35.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:35.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:11:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:11:35] "GET /metrics HTTP/1.1" 200 48436 "" "Prometheus/2.51.0"
Dec 05 10:11:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:11:35] "GET /metrics HTTP/1.1" 200 48436 "" "Prometheus/2.51.0"
Dec 05 10:11:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:36 compute-0 ceph-mon[74418]: pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 05 10:11:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:37.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:37.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:37.324Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:11:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:11:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:11:38 compute-0 podman[260635]: 2025-12-05 10:11:38.418203366 +0000 UTC m=+0.089045235 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 05 10:11:38 compute-0 ceph-mon[74418]: pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:11:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:39.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:39.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:11:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:40 compute-0 ceph-mon[74418]: pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:11:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:41.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:41.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:11:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:11:42 compute-0 ceph-mon[74418]: pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:11:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:11:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:43.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:11:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:43.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:11:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:43.618Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:11:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:44 compute-0 ceph-mon[74418]: pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:45.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:45.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:11:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:11:45] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:11:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:11:45] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:11:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:46 compute-0 ceph-mon[74418]: pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:11:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:47.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:47.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:47.325Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:11:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:47.326Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:11:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:47.326Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:11:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:11:48 compute-0 ceph-mon[74418]: pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:49.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:49.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:11:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:50 compute-0 ceph-mon[74418]: pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:11:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:51.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:51.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:51 compute-0 sudo[260675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:11:51 compute-0 sudo[260675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:11:51 compute-0 sudo[260675]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00023e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:52 compute-0 sudo[260702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:11:52 compute-0 sudo[260702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:11:52 compute-0 sudo[260702]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:52 compute-0 sudo[260727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:11:52 compute-0 sudo[260727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:11:52 compute-0 ceph-mon[74418]: pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:11:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:11:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:11:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:53.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:11:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:11:53 compute-0 sudo[260727]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 05 10:11:53 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 10:11:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:53.619Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:11:53 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 10:11:53 compute-0 nova_compute[257087]: 2025-12-05 10:11:53.895 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0002400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:54 compute-0 ceph-mon[74418]: pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1706594145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:11:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 10:11:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:11:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 10:11:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:11:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 10:11:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:11:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 10:11:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:11:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:55.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:11:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:55.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:11:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:11:55 compute-0 nova_compute[257087]: 2025-12-05 10:11:55.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:55 compute-0 nova_compute[257087]: 2025-12-05 10:11:55.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:55 compute-0 nova_compute[257087]: 2025-12-05 10:11:55.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:55 compute-0 nova_compute[257087]: 2025-12-05 10:11:55.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:55 compute-0 nova_compute[257087]: 2025-12-05 10:11:55.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:55 compute-0 nova_compute[257087]: 2025-12-05 10:11:55.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 05 10:11:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 05 10:11:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:11:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:11:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:11:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:11:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:11:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:11:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:11:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:11:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:11:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:11:55] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:11:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:11:55] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec 05 10:11:55 compute-0 sudo[260785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:11:55 compute-0 sudo[260785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:11:55 compute-0 sudo[260785]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:55 compute-0 sudo[260810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:11:55 compute-0 sudo[260810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1572099436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/718784540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:11:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:11:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:56 compute-0 podman[260877]: 2025-12-05 10:11:56.183078656 +0000 UTC m=+0.054963328 container create f602e1be2f6569120c7aa76b5d14ed0b85309015dff26df2e5ff0ae173edf9cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_heisenberg, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:11:56 compute-0 systemd[1]: Started libpod-conmon-f602e1be2f6569120c7aa76b5d14ed0b85309015dff26df2e5ff0ae173edf9cc.scope.
Dec 05 10:11:56 compute-0 podman[260877]: 2025-12-05 10:11:56.156519473 +0000 UTC m=+0.028404195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:11:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:11:56 compute-0 podman[260877]: 2025-12-05 10:11:56.290576781 +0000 UTC m=+0.162461503 container init f602e1be2f6569120c7aa76b5d14ed0b85309015dff26df2e5ff0ae173edf9cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_heisenberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:11:56 compute-0 podman[260877]: 2025-12-05 10:11:56.301430937 +0000 UTC m=+0.173315619 container start f602e1be2f6569120c7aa76b5d14ed0b85309015dff26df2e5ff0ae173edf9cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_heisenberg, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:11:56 compute-0 podman[260877]: 2025-12-05 10:11:56.306638409 +0000 UTC m=+0.178523161 container attach f602e1be2f6569120c7aa76b5d14ed0b85309015dff26df2e5ff0ae173edf9cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_heisenberg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:11:56 compute-0 affectionate_heisenberg[260894]: 167 167
Dec 05 10:11:56 compute-0 systemd[1]: libpod-f602e1be2f6569120c7aa76b5d14ed0b85309015dff26df2e5ff0ae173edf9cc.scope: Deactivated successfully.
Dec 05 10:11:56 compute-0 podman[260877]: 2025-12-05 10:11:56.31038189 +0000 UTC m=+0.182266572 container died f602e1be2f6569120c7aa76b5d14ed0b85309015dff26df2e5ff0ae173edf9cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-54dde184b7a933393eeb1621299258b078f1b0b530841a091e1532f7bfd4dc50-merged.mount: Deactivated successfully.
Dec 05 10:11:56 compute-0 nova_compute[257087]: 2025-12-05 10:11:56.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:56 compute-0 nova_compute[257087]: 2025-12-05 10:11:56.530 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:11:56 compute-0 nova_compute[257087]: 2025-12-05 10:11:56.531 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:11:56 compute-0 nova_compute[257087]: 2025-12-05 10:11:56.560 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:11:56 compute-0 nova_compute[257087]: 2025-12-05 10:11:56.560 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:56 compute-0 nova_compute[257087]: 2025-12-05 10:11:56.560 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:11:56 compute-0 podman[260877]: 2025-12-05 10:11:56.690225301 +0000 UTC m=+0.562110013 container remove f602e1be2f6569120c7aa76b5d14ed0b85309015dff26df2e5ff0ae173edf9cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:11:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:56 compute-0 systemd[1]: libpod-conmon-f602e1be2f6569120c7aa76b5d14ed0b85309015dff26df2e5ff0ae173edf9cc.scope: Deactivated successfully.
Dec 05 10:11:56 compute-0 podman[260919]: 2025-12-05 10:11:56.904044211 +0000 UTC m=+0.036371821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:11:57 compute-0 podman[260919]: 2025-12-05 10:11:57.006399988 +0000 UTC m=+0.138727578 container create 9aae6635a9699f71d03d4c5cffd7f833a17d10be73840021fcd9dea42fe1f030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:11:57 compute-0 ceph-mon[74418]: pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:11:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3253059115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:11:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:57.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:57.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:11:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/443590229' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:11:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:11:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/443590229' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:11:57 compute-0 systemd[1]: Started libpod-conmon-9aae6635a9699f71d03d4c5cffd7f833a17d10be73840021fcd9dea42fe1f030.scope.
Dec 05 10:11:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bbaf2f1cd1c207be67609cde32c158f7cb64b243cfe9e880303ed8425c891a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bbaf2f1cd1c207be67609cde32c158f7cb64b243cfe9e880303ed8425c891a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bbaf2f1cd1c207be67609cde32c158f7cb64b243cfe9e880303ed8425c891a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bbaf2f1cd1c207be67609cde32c158f7cb64b243cfe9e880303ed8425c891a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bbaf2f1cd1c207be67609cde32c158f7cb64b243cfe9e880303ed8425c891a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:57 compute-0 podman[260919]: 2025-12-05 10:11:57.133699962 +0000 UTC m=+0.266027642 container init 9aae6635a9699f71d03d4c5cffd7f833a17d10be73840021fcd9dea42fe1f030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:11:57 compute-0 podman[260919]: 2025-12-05 10:11:57.142519383 +0000 UTC m=+0.274847013 container start 9aae6635a9699f71d03d4c5cffd7f833a17d10be73840021fcd9dea42fe1f030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:11:57 compute-0 podman[260919]: 2025-12-05 10:11:57.146762468 +0000 UTC m=+0.279090158 container attach 9aae6635a9699f71d03d4c5cffd7f833a17d10be73840021fcd9dea42fe1f030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Dec 05 10:11:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:11:57.328Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:11:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:57 compute-0 romantic_maxwell[260935]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:11:57 compute-0 romantic_maxwell[260935]: --> All data devices are unavailable
Dec 05 10:11:57 compute-0 nova_compute[257087]: 2025-12-05 10:11:57.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:57 compute-0 systemd[1]: libpod-9aae6635a9699f71d03d4c5cffd7f833a17d10be73840021fcd9dea42fe1f030.scope: Deactivated successfully.
Dec 05 10:11:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0002420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:11:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:11:57 compute-0 podman[260950]: 2025-12-05 10:11:57.602774271 +0000 UTC m=+0.038855379 container died 9aae6635a9699f71d03d4c5cffd7f833a17d10be73840021fcd9dea42fe1f030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:11:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-20bbaf2f1cd1c207be67609cde32c158f7cb64b243cfe9e880303ed8425c891a-merged.mount: Deactivated successfully.
Dec 05 10:11:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:11:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:11:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:11:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:11:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:11:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:11:57 compute-0 podman[260950]: 2025-12-05 10:11:57.66593672 +0000 UTC m=+0.102017738 container remove 9aae6635a9699f71d03d4c5cffd7f833a17d10be73840021fcd9dea42fe1f030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec 05 10:11:57 compute-0 systemd[1]: libpod-conmon-9aae6635a9699f71d03d4c5cffd7f833a17d10be73840021fcd9dea42fe1f030.scope: Deactivated successfully.
Dec 05 10:11:57 compute-0 sudo[260810]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:57 compute-0 nova_compute[257087]: 2025-12-05 10:11:57.721 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:11:57 compute-0 nova_compute[257087]: 2025-12-05 10:11:57.723 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:11:57 compute-0 nova_compute[257087]: 2025-12-05 10:11:57.724 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:11:57 compute-0 nova_compute[257087]: 2025-12-05 10:11:57.724 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:11:57 compute-0 nova_compute[257087]: 2025-12-05 10:11:57.725 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:11:57 compute-0 sudo[260966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:11:57 compute-0 sudo[260966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:11:57 compute-0 sudo[260966]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:57 compute-0 sudo[260992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:11:57 compute-0 sudo[260992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:11:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/443590229' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:11:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/443590229' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:11:58 compute-0 ceph-mon[74418]: pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:11:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:11:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:11:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:11:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/720279831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.202 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:11:58 compute-0 podman[261078]: 2025-12-05 10:11:58.344929503 +0000 UTC m=+0.072045582 container create e318028ffb489f05e7efa0f66c2799dce875e395c2bd374b871994dca4b0690a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_yonath, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 10:11:58 compute-0 systemd[1]: Started libpod-conmon-e318028ffb489f05e7efa0f66c2799dce875e395c2bd374b871994dca4b0690a.scope.
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.396 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.399 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4878MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.400 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.400 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:11:58 compute-0 podman[261078]: 2025-12-05 10:11:58.319032169 +0000 UTC m=+0.046148358 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:11:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:11:58 compute-0 podman[261078]: 2025-12-05 10:11:58.456821809 +0000 UTC m=+0.183937968 container init e318028ffb489f05e7efa0f66c2799dce875e395c2bd374b871994dca4b0690a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.456 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.457 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:11:58 compute-0 podman[261078]: 2025-12-05 10:11:58.466874422 +0000 UTC m=+0.193990501 container start e318028ffb489f05e7efa0f66c2799dce875e395c2bd374b871994dca4b0690a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 10:11:58 compute-0 podman[261078]: 2025-12-05 10:11:58.47083252 +0000 UTC m=+0.197948649 container attach e318028ffb489f05e7efa0f66c2799dce875e395c2bd374b871994dca4b0690a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.474 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:11:58 compute-0 sad_yonath[261095]: 167 167
Dec 05 10:11:58 compute-0 systemd[1]: libpod-e318028ffb489f05e7efa0f66c2799dce875e395c2bd374b871994dca4b0690a.scope: Deactivated successfully.
Dec 05 10:11:58 compute-0 podman[261078]: 2025-12-05 10:11:58.476886785 +0000 UTC m=+0.204002904 container died e318028ffb489f05e7efa0f66c2799dce875e395c2bd374b871994dca4b0690a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:11:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-778e58e18ba8e5526a0624c66ef51b113cf166a2aac992b8779ea69e4db5b7bc-merged.mount: Deactivated successfully.
Dec 05 10:11:58 compute-0 podman[261078]: 2025-12-05 10:11:58.519846844 +0000 UTC m=+0.246962913 container remove e318028ffb489f05e7efa0f66c2799dce875e395c2bd374b871994dca4b0690a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 10:11:58 compute-0 systemd[1]: libpod-conmon-e318028ffb489f05e7efa0f66c2799dce875e395c2bd374b871994dca4b0690a.scope: Deactivated successfully.
Dec 05 10:11:58 compute-0 podman[261138]: 2025-12-05 10:11:58.696647587 +0000 UTC m=+0.051929134 container create 67f01255f36e81c8faee74c9526bc01e5eae713ee88dae3b9f283244b4139126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:11:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:58 compute-0 systemd[1]: Started libpod-conmon-67f01255f36e81c8faee74c9526bc01e5eae713ee88dae3b9f283244b4139126.scope.
Dec 05 10:11:58 compute-0 podman[261138]: 2025-12-05 10:11:58.676070177 +0000 UTC m=+0.031351744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:11:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e3c69a8d5d84681ef37a23dd043f18ed5e1cbc9c6d46b09c0bb013c1f836cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e3c69a8d5d84681ef37a23dd043f18ed5e1cbc9c6d46b09c0bb013c1f836cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e3c69a8d5d84681ef37a23dd043f18ed5e1cbc9c6d46b09c0bb013c1f836cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e3c69a8d5d84681ef37a23dd043f18ed5e1cbc9c6d46b09c0bb013c1f836cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:58 compute-0 podman[261138]: 2025-12-05 10:11:58.817894338 +0000 UTC m=+0.173175885 container init 67f01255f36e81c8faee74c9526bc01e5eae713ee88dae3b9f283244b4139126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 05 10:11:58 compute-0 podman[261138]: 2025-12-05 10:11:58.824284321 +0000 UTC m=+0.179565878 container start 67f01255f36e81c8faee74c9526bc01e5eae713ee88dae3b9f283244b4139126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:11:58 compute-0 podman[261138]: 2025-12-05 10:11:58.827777656 +0000 UTC m=+0.183059223 container attach 67f01255f36e81c8faee74c9526bc01e5eae713ee88dae3b9f283244b4139126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_spence, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:11:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:11:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1584929375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.941 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.948 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.970 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.971 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:11:58 compute-0 nova_compute[257087]: 2025-12-05 10:11:58.971 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:11:59 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/720279831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:11:59 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1584929375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:11:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:11:59.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:11:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:11:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:11:59.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:11:59 compute-0 brave_spence[261154]: {
Dec 05 10:11:59 compute-0 brave_spence[261154]:     "1": [
Dec 05 10:11:59 compute-0 brave_spence[261154]:         {
Dec 05 10:11:59 compute-0 brave_spence[261154]:             "devices": [
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "/dev/loop3"
Dec 05 10:11:59 compute-0 brave_spence[261154]:             ],
Dec 05 10:11:59 compute-0 brave_spence[261154]:             "lv_name": "ceph_lv0",
Dec 05 10:11:59 compute-0 brave_spence[261154]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:11:59 compute-0 brave_spence[261154]:             "lv_size": "21470642176",
Dec 05 10:11:59 compute-0 brave_spence[261154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:11:59 compute-0 brave_spence[261154]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:11:59 compute-0 brave_spence[261154]:             "name": "ceph_lv0",
Dec 05 10:11:59 compute-0 brave_spence[261154]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:11:59 compute-0 brave_spence[261154]:             "tags": {
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.cluster_name": "ceph",
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.crush_device_class": "",
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.encrypted": "0",
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.osd_id": "1",
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.type": "block",
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.vdo": "0",
Dec 05 10:11:59 compute-0 brave_spence[261154]:                 "ceph.with_tpm": "0"
Dec 05 10:11:59 compute-0 brave_spence[261154]:             },
Dec 05 10:11:59 compute-0 brave_spence[261154]:             "type": "block",
Dec 05 10:11:59 compute-0 brave_spence[261154]:             "vg_name": "ceph_vg0"
Dec 05 10:11:59 compute-0 brave_spence[261154]:         }
Dec 05 10:11:59 compute-0 brave_spence[261154]:     ]
Dec 05 10:11:59 compute-0 brave_spence[261154]: }
Dec 05 10:11:59 compute-0 systemd[1]: libpod-67f01255f36e81c8faee74c9526bc01e5eae713ee88dae3b9f283244b4139126.scope: Deactivated successfully.
Dec 05 10:11:59 compute-0 podman[261138]: 2025-12-05 10:11:59.148871817 +0000 UTC m=+0.504153364 container died 67f01255f36e81c8faee74c9526bc01e5eae713ee88dae3b9f283244b4139126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_spence, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:11:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3e3c69a8d5d84681ef37a23dd043f18ed5e1cbc9c6d46b09c0bb013c1f836cb-merged.mount: Deactivated successfully.
Dec 05 10:11:59 compute-0 podman[261138]: 2025-12-05 10:11:59.198180169 +0000 UTC m=+0.553461726 container remove 67f01255f36e81c8faee74c9526bc01e5eae713ee88dae3b9f283244b4139126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:11:59 compute-0 systemd[1]: libpod-conmon-67f01255f36e81c8faee74c9526bc01e5eae713ee88dae3b9f283244b4139126.scope: Deactivated successfully.
Dec 05 10:11:59 compute-0 sudo[260992]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:59 compute-0 sudo[261177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:11:59 compute-0 sudo[261177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:11:59 compute-0 sudo[261177]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:59 compute-0 sudo[261202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:11:59 compute-0 sudo[261202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:11:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:11:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:11:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:11:59 compute-0 podman[261269]: 2025-12-05 10:11:59.925000774 +0000 UTC m=+0.068230539 container create 3c9fe03598512504f7aeb8442423b56dff3d1c72580b9d2cadddac1081687974 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sammet, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 10:11:59 compute-0 systemd[1]: Started libpod-conmon-3c9fe03598512504f7aeb8442423b56dff3d1c72580b9d2cadddac1081687974.scope.
Dec 05 10:11:59 compute-0 podman[261269]: 2025-12-05 10:11:59.899655204 +0000 UTC m=+0.042885049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:12:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:12:00 compute-0 podman[261269]: 2025-12-05 10:12:00.033700843 +0000 UTC m=+0.176930628 container init 3c9fe03598512504f7aeb8442423b56dff3d1c72580b9d2cadddac1081687974 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:12:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:00 compute-0 podman[261269]: 2025-12-05 10:12:00.048878455 +0000 UTC m=+0.192108220 container start 3c9fe03598512504f7aeb8442423b56dff3d1c72580b9d2cadddac1081687974 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 10:12:00 compute-0 podman[261269]: 2025-12-05 10:12:00.052949146 +0000 UTC m=+0.196178921 container attach 3c9fe03598512504f7aeb8442423b56dff3d1c72580b9d2cadddac1081687974 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sammet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 10:12:00 compute-0 ceph-mon[74418]: pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:12:00 compute-0 loving_sammet[261286]: 167 167
Dec 05 10:12:00 compute-0 systemd[1]: libpod-3c9fe03598512504f7aeb8442423b56dff3d1c72580b9d2cadddac1081687974.scope: Deactivated successfully.
Dec 05 10:12:00 compute-0 podman[261269]: 2025-12-05 10:12:00.057963342 +0000 UTC m=+0.201193117 container died 3c9fe03598512504f7aeb8442423b56dff3d1c72580b9d2cadddac1081687974 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sammet, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:12:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-56f56441c11d34e571d14557064e271da86f7cfe5f633c4718fe58a397c1c8ed-merged.mount: Deactivated successfully.
Dec 05 10:12:00 compute-0 podman[261269]: 2025-12-05 10:12:00.090503109 +0000 UTC m=+0.233732874 container remove 3c9fe03598512504f7aeb8442423b56dff3d1c72580b9d2cadddac1081687974 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sammet, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 10:12:00 compute-0 systemd[1]: libpod-conmon-3c9fe03598512504f7aeb8442423b56dff3d1c72580b9d2cadddac1081687974.scope: Deactivated successfully.
Dec 05 10:12:00 compute-0 podman[261310]: 2025-12-05 10:12:00.294557333 +0000 UTC m=+0.054997978 container create cc19b64daa1badb49049d8c6f9ec6534773090bc48f3fdc9581d1c7fac021e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:12:00 compute-0 systemd[1]: Started libpod-conmon-cc19b64daa1badb49049d8c6f9ec6534773090bc48f3fdc9581d1c7fac021e89.scope.
Dec 05 10:12:00 compute-0 podman[261310]: 2025-12-05 10:12:00.267111636 +0000 UTC m=+0.027552291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:12:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/438d017f2ff5cb662d6c4ee3166ccc7b56a91b27d0ac2858cfcf657f8c09a34b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/438d017f2ff5cb662d6c4ee3166ccc7b56a91b27d0ac2858cfcf657f8c09a34b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/438d017f2ff5cb662d6c4ee3166ccc7b56a91b27d0ac2858cfcf657f8c09a34b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/438d017f2ff5cb662d6c4ee3166ccc7b56a91b27d0ac2858cfcf657f8c09a34b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:12:00 compute-0 podman[261310]: 2025-12-05 10:12:00.380730198 +0000 UTC m=+0.141170883 container init cc19b64daa1badb49049d8c6f9ec6534773090bc48f3fdc9581d1c7fac021e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cerf, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:12:00 compute-0 podman[261310]: 2025-12-05 10:12:00.394611966 +0000 UTC m=+0.155052611 container start cc19b64daa1badb49049d8c6f9ec6534773090bc48f3fdc9581d1c7fac021e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:12:00 compute-0 podman[261310]: 2025-12-05 10:12:00.398172844 +0000 UTC m=+0.158613449 container attach cc19b64daa1badb49049d8c6f9ec6534773090bc48f3fdc9581d1c7fac021e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cerf, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 05 10:12:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:01.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:01.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:01 compute-0 lvm[261402]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:12:01 compute-0 lvm[261402]: VG ceph_vg0 finished
Dec 05 10:12:01 compute-0 festive_cerf[261326]: {}
Dec 05 10:12:01 compute-0 systemd[1]: libpod-cc19b64daa1badb49049d8c6f9ec6534773090bc48f3fdc9581d1c7fac021e89.scope: Deactivated successfully.
Dec 05 10:12:01 compute-0 podman[261310]: 2025-12-05 10:12:01.188664441 +0000 UTC m=+0.949105056 container died cc19b64daa1badb49049d8c6f9ec6534773090bc48f3fdc9581d1c7fac021e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cerf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:12:01 compute-0 systemd[1]: libpod-cc19b64daa1badb49049d8c6f9ec6534773090bc48f3fdc9581d1c7fac021e89.scope: Consumed 1.348s CPU time.
Dec 05 10:12:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-438d017f2ff5cb662d6c4ee3166ccc7b56a91b27d0ac2858cfcf657f8c09a34b-merged.mount: Deactivated successfully.
Dec 05 10:12:01 compute-0 podman[261310]: 2025-12-05 10:12:01.235180507 +0000 UTC m=+0.995621132 container remove cc19b64daa1badb49049d8c6f9ec6534773090bc48f3fdc9581d1c7fac021e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:12:01 compute-0 systemd[1]: libpod-conmon-cc19b64daa1badb49049d8c6f9ec6534773090bc48f3fdc9581d1c7fac021e89.scope: Deactivated successfully.
Dec 05 10:12:01 compute-0 sudo[261202]: pam_unix(sudo:session): session closed for user root
Dec 05 10:12:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:12:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:12:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:12:01 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:12:01 compute-0 sudo[261415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:12:01 compute-0 sudo[261415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:12:01 compute-0 sudo[261415]: pam_unix(sudo:session): session closed for user root
Dec 05 10:12:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:12:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:02 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:12:02 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:12:02 compute-0 ceph-mon[74418]: pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:12:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:03.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:03.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:12:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:12:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:03.620Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:12:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:04 compute-0 ceph-mon[74418]: pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:12:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:05.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:12:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:05.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:12:05 compute-0 podman[261444]: 2025-12-05 10:12:05.453711519 +0000 UTC m=+0.115899828 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:12:05 compute-0 podman[261445]: 2025-12-05 10:12:05.460185675 +0000 UTC m=+0.120093131 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:12:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:12:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:12:05] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:12:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:12:05] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:12:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:06 compute-0 ceph-mon[74418]: pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:12:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:07.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:07.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:07.328Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:12:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:12:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:07 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0002460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:12:08 compute-0 ceph-mon[74418]: pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:12:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:09.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:09.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:09 compute-0 podman[261488]: 2025-12-05 10:12:09.487046987 +0000 UTC m=+0.135991832 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec 05 10:12:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:12:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0002480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:10 compute-0 ceph-mon[74418]: pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:12:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:11.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:11.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:11 compute-0 sudo[261517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:12:11 compute-0 sudo[261517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:12:11 compute-0 sudo[261517]: pam_unix(sudo:session): session closed for user root
Dec 05 10:12:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:12:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:12:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:12:12 compute-0 ceph-mon[74418]: pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:12:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:12:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00024a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:13.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:13.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:12:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:12:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:13 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4001fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:13.621Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:12:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:14 compute-0 ceph-mon[74418]: pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:12:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:15.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:15.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=404 latency=0.003000081s ======
Dec 05 10:12:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:15.298 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.003000081s
Dec 05 10:12:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:12:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - - [05/Dec/2025:10:12:15.319 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000026s
Dec 05 10:12:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:12:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00024c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:12:15] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:12:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:12:15] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:12:16 compute-0 ceph-mon[74418]: pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:12:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:17.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:17.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:17.329Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:12:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:12:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd00024e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:12:18 compute-0 ceph-mon[74418]: pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:12:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:19.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:19.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:12:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec 05 10:12:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:12:20.570 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:12:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:12:20.571 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:12:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:12:20.572 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:12:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec 05 10:12:20 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec 05 10:12:20 compute-0 ceph-mon[74418]: pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:12:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0002500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:21.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:21.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Dec 05 10:12:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:21 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec 05 10:12:21 compute-0 ceph-mon[74418]: osdmap e132: 3 total, 3 up, 3 in
Dec 05 10:12:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec 05 10:12:21 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec 05 10:12:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec 05 10:12:22 compute-0 ceph-mon[74418]: pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Dec 05 10:12:22 compute-0 ceph-mon[74418]: osdmap e133: 3 total, 3 up, 3 in
Dec 05 10:12:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec 05 10:12:22 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec 05 10:12:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:23.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:12:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:23.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:12:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:23.623Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:12:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:23.623Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:12:23 compute-0 ceph-mon[74418]: osdmap e134: 3 total, 3 up, 3 in
Dec 05 10:12:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec 05 10:12:24 compute-0 ceph-mon[74418]: pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:12:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec 05 10:12:24 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec 05 10:12:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0043d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:25.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:25.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v694: 353 pgs: 353 active+clean; 21 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 4.2 MiB/s wr, 54 op/s
Dec 05 10:12:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:25 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:12:25] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:12:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:12:25] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec 05 10:12:25 compute-0 ceph-mon[74418]: osdmap e135: 3 total, 3 up, 3 in
Dec 05 10:12:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0005400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/101226 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:12:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:26 compute-0 ceph-mon[74418]: pgmap v694: 353 pgs: 353 active+clean; 21 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 4.2 MiB/s wr, 54 op/s
Dec 05 10:12:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:27.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:27.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:27.330Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v695: 353 pgs: 353 active+clean; 21 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.4 MiB/s wr, 44 op/s
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:12:27
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', 'backups', '.rgw.root', '.nfs', 'cephfs.cephfs.data']
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:12:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:12:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:12:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4004d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:12:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033308812756397733 of space, bias 1.0, pg target 0.0999264382691932 quantized to 32 (current 32)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:12:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:12:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:12:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec 05 10:12:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec 05 10:12:28 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec 05 10:12:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:28 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:28 compute-0 ceph-mon[74418]: pgmap v695: 353 pgs: 353 active+clean; 21 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.4 MiB/s wr, 44 op/s
Dec 05 10:12:28 compute-0 ceph-mon[74418]: osdmap e136: 3 total, 3 up, 3 in
Dec 05 10:12:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:29.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:29.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v697: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.0 MiB/s wr, 56 op/s
Dec 05 10:12:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:29 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:30 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:30 compute-0 ceph-mon[74418]: pgmap v697: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.0 MiB/s wr, 56 op/s
Dec 05 10:12:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:31.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:31.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:31 compute-0 sudo[261564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:12:31 compute-0 sudo[261564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:12:31 compute-0 sudo[261564]: pam_unix(sudo:session): session closed for user root
Dec 05 10:12:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v698: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 05 10:12:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:31 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:32 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:32 compute-0 ceph-mon[74418]: pgmap v698: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 05 10:12:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:33.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:12:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:33.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v699: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 2.3 MiB/s wr, 13 op/s
Dec 05 10:12:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:33 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:33.624Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:12:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:33.625Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:12:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:33.626Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:12:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:34 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:34 compute-0 ceph-mon[74418]: pgmap v699: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 2.3 MiB/s wr, 13 op/s
Dec 05 10:12:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:35.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:12:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:35.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:12:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:12:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v700: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Dec 05 10:12:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:35 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:12:35] "GET /metrics HTTP/1.1" 200 48488 "" "Prometheus/2.51.0"
Dec 05 10:12:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:12:35] "GET /metrics HTTP/1.1" 200 48488 "" "Prometheus/2.51.0"
Dec 05 10:12:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:36 compute-0 podman[261594]: 2025-12-05 10:12:36.4258769 +0000 UTC m=+0.077867920 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 05 10:12:36 compute-0 podman[261595]: 2025-12-05 10:12:36.435476502 +0000 UTC m=+0.084357517 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:12:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:36 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:36 compute-0 ceph-mon[74418]: pgmap v700: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Dec 05 10:12:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:37.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:37.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:37.331Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:12:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v701: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Dec 05 10:12:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:37 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:12:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:12:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:12:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:12:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:38 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:38 compute-0 ceph-mon[74418]: pgmap v701: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Dec 05 10:12:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:39.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:39.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v702: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.8 MiB/s wr, 14 op/s
Dec 05 10:12:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:39 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:40 compute-0 podman[261634]: 2025-12-05 10:12:40.494077261 +0000 UTC m=+0.133766653 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec 05 10:12:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:40 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004680 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:40 compute-0 ceph-mon[74418]: pgmap v702: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.8 MiB/s wr, 14 op/s
Dec 05 10:12:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:41.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:41.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:12:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v703: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 937 B/s wr, 3 op/s
Dec 05 10:12:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:41 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:12:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:12:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:42 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:12:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:12:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:43.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:12:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:43.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:43 compute-0 ceph-mon[74418]: pgmap v703: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 937 B/s wr, 3 op/s
Dec 05 10:12:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:12:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v704: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:12:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:43 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0046a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:43.627Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:12:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:44 compute-0 ceph-mon[74418]: pgmap v704: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:12:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:44 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:45.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:45.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v705: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:12:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:45 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:12:45] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Dec 05 10:12:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:12:45] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Dec 05 10:12:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0046c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/101246 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:12:46 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:12:46.529 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:46 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:12:46.530 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:12:46 compute-0 ceph-mon[74418]: pgmap v705: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 05 10:12:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:46 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:47.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:47.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:47.332Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:12:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v706: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:12:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:47 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:12:48 compute-0 ceph-mon[74418]: pgmap v706: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:12:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:48 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0046e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:49.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:49.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v707: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:12:49 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:12:49.533 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:12:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:49 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:50 compute-0 ceph-mon[74418]: pgmap v707: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 05 10:12:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:50 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:51.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:51.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v708: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:12:51 compute-0 sudo[261671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:12:51 compute-0 sudo[261671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:12:51 compute-0 sudo[261671]: pam_unix(sudo:session): session closed for user root
Dec 05 10:12:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:51 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:52 compute-0 nova_compute[257087]: 2025-12-05 10:12:52.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:52 compute-0 nova_compute[257087]: 2025-12-05 10:12:52.531 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 10:12:52 compute-0 nova_compute[257087]: 2025-12-05 10:12:52.547 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 10:12:52 compute-0 nova_compute[257087]: 2025-12-05 10:12:52.549 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:52 compute-0 nova_compute[257087]: 2025-12-05 10:12:52.549 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 10:12:52 compute-0 nova_compute[257087]: 2025-12-05 10:12:52.564 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:52 compute-0 ceph-mon[74418]: pgmap v708: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:12:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:52 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:12:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:53.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:53.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v709: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:12:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:53 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:53.628Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:12:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:53.629Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:12:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:54 compute-0 ceph-mon[74418]: pgmap v709: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:12:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:54 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:55.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:55.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v710: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:12:55 compute-0 nova_compute[257087]: 2025-12-05 10:12:55.581 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:55 compute-0 nova_compute[257087]: 2025-12-05 10:12:55.582 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:55 compute-0 nova_compute[257087]: 2025-12-05 10:12:55.582 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:55 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:12:55] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Dec 05 10:12:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:12:55] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Dec 05 10:12:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:56 compute-0 nova_compute[257087]: 2025-12-05 10:12:56.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:56 compute-0 nova_compute[257087]: 2025-12-05 10:12:56.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:12:56 compute-0 nova_compute[257087]: 2025-12-05 10:12:56.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:12:56 compute-0 nova_compute[257087]: 2025-12-05 10:12:56.548 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:12:56 compute-0 nova_compute[257087]: 2025-12-05 10:12:56.549 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:56 compute-0 ceph-mon[74418]: pgmap v710: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 05 10:12:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2971839290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:12:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/995306000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:12:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:56 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:12:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:57.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:12:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:12:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:57.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:12:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:12:57.333Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:12:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v711: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:12:57 compute-0 nova_compute[257087]: 2025-12-05 10:12:57.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:57 compute-0 nova_compute[257087]: 2025-12-05 10:12:57.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:12:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:12:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:57 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cdc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:12:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:12:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:12:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:12:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:12:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:12:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1011038643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:12:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/431311019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:12:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/4170156381' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:12:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/4170156381' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:12:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:12:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:12:58 compute-0 nova_compute[257087]: 2025-12-05 10:12:58.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:58 compute-0 nova_compute[257087]: 2025-12-05 10:12:58.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:12:58 compute-0 nova_compute[257087]: 2025-12-05 10:12:58.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:58 compute-0 nova_compute[257087]: 2025-12-05 10:12:58.564 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:12:58 compute-0 nova_compute[257087]: 2025-12-05 10:12:58.565 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:12:58 compute-0 nova_compute[257087]: 2025-12-05 10:12:58.565 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:12:58 compute-0 nova_compute[257087]: 2025-12-05 10:12:58.566 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:12:58 compute-0 nova_compute[257087]: 2025-12-05 10:12:58.567 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:12:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:58 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca8004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:58 compute-0 ceph-mon[74418]: pgmap v711: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:12:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:12:59 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3514701393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.065 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:12:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:12:59.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:12:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:12:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:12:59.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.272 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.273 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4912MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.274 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.274 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.493 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.494 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:12:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v712: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.555 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing inventories for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 10:12:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:12:59 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.618 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Updating ProviderTree inventory for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.619 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Updating inventory in ProviderTree for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.637 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing aggregate associations for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.663 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing trait associations for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6, traits: HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_MMX,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 10:12:59 compute-0 nova_compute[257087]: 2025-12-05 10:12:59.687 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:13:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3514701393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:13:00 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3377508898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:00 compute-0 nova_compute[257087]: 2025-12-05 10:13:00.220 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:13:00 compute-0 nova_compute[257087]: 2025-12-05 10:13:00.229 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:13:00 compute-0 nova_compute[257087]: 2025-12-05 10:13:00.251 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:13:00 compute-0 nova_compute[257087]: 2025-12-05 10:13:00.253 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:13:00 compute-0 nova_compute[257087]: 2025-12-05 10:13:00.254 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:13:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:00 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:01.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:01.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:01 compute-0 ceph-mon[74418]: pgmap v712: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:13:01 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3377508898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v713: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:13:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:01 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0002b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:01 compute-0 sudo[261752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:13:01 compute-0 sudo[261752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:13:01 compute-0 sudo[261752]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:01 compute-0 sudo[261777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:13:01 compute-0 sudo[261777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:13:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:02 compute-0 sudo[261777]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:13:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:13:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:13:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:13:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:13:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/101302 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:13:02 compute-0 ceph-mon[74418]: pgmap v713: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:13:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:13:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:13:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:13:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:13:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:13:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:13:02 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:13:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:13:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:13:02 compute-0 sudo[261835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:13:02 compute-0 sudo[261835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:13:02 compute-0 sudo[261835]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:02 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:02 compute-0 sudo[261860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:13:02 compute-0 sudo[261860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:13:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:03.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:03.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:03 compute-0 podman[261926]: 2025-12-05 10:13:03.306310741 +0000 UTC m=+0.081640003 container create dee69d3cbae8352f9da5408350ebbc4fd51d9e5efb29e70b883827373fe7a791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_neumann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:13:03 compute-0 podman[261926]: 2025-12-05 10:13:03.247024258 +0000 UTC m=+0.022353420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:13:03 compute-0 systemd[1]: Started libpod-conmon-dee69d3cbae8352f9da5408350ebbc4fd51d9e5efb29e70b883827373fe7a791.scope.
Dec 05 10:13:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:13:03 compute-0 podman[261926]: 2025-12-05 10:13:03.448504242 +0000 UTC m=+0.223833414 container init dee69d3cbae8352f9da5408350ebbc4fd51d9e5efb29e70b883827373fe7a791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_neumann, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 10:13:03 compute-0 podman[261926]: 2025-12-05 10:13:03.459497831 +0000 UTC m=+0.234826973 container start dee69d3cbae8352f9da5408350ebbc4fd51d9e5efb29e70b883827373fe7a791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 10:13:03 compute-0 keen_neumann[261942]: 167 167
Dec 05 10:13:03 compute-0 systemd[1]: libpod-dee69d3cbae8352f9da5408350ebbc4fd51d9e5efb29e70b883827373fe7a791.scope: Deactivated successfully.
Dec 05 10:13:03 compute-0 podman[261926]: 2025-12-05 10:13:03.476749251 +0000 UTC m=+0.252078423 container attach dee69d3cbae8352f9da5408350ebbc4fd51d9e5efb29e70b883827373fe7a791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_neumann, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:13:03 compute-0 podman[261926]: 2025-12-05 10:13:03.478315333 +0000 UTC m=+0.253644535 container died dee69d3cbae8352f9da5408350ebbc4fd51d9e5efb29e70b883827373fe7a791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_neumann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:13:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v714: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:13:03 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:13:03 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:13:03 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:13:03 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:13:03 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:13:03 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:13:03 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f2fa711cd921d057bd83ab9e0ce6c4fcfa50d5a416002e6993388bb6eafa88c-merged.mount: Deactivated successfully.
Dec 05 10:13:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:03 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cb4002970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:03.630Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:13:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:03.630Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:13:03 compute-0 podman[261926]: 2025-12-05 10:13:03.72945915 +0000 UTC m=+0.504788322 container remove dee69d3cbae8352f9da5408350ebbc4fd51d9e5efb29e70b883827373fe7a791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_neumann, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:13:03 compute-0 systemd[1]: libpod-conmon-dee69d3cbae8352f9da5408350ebbc4fd51d9e5efb29e70b883827373fe7a791.scope: Deactivated successfully.
Dec 05 10:13:03 compute-0 podman[261966]: 2025-12-05 10:13:03.994184715 +0000 UTC m=+0.096574919 container create 06751a0ac6000985ad0bdd440ed91c28edcffa43efc75308b1110301ea0faf44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 10:13:04 compute-0 podman[261966]: 2025-12-05 10:13:03.928973651 +0000 UTC m=+0.031363905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:13:04 compute-0 systemd[1]: Started libpod-conmon-06751a0ac6000985ad0bdd440ed91c28edcffa43efc75308b1110301ea0faf44.scope.
Dec 05 10:13:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2936beb6cf777d3614409269456b21f8fe5a8283a9fcc28ec3ca13a060a10254/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2936beb6cf777d3614409269456b21f8fe5a8283a9fcc28ec3ca13a060a10254/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2936beb6cf777d3614409269456b21f8fe5a8283a9fcc28ec3ca13a060a10254/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2936beb6cf777d3614409269456b21f8fe5a8283a9fcc28ec3ca13a060a10254/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2936beb6cf777d3614409269456b21f8fe5a8283a9fcc28ec3ca13a060a10254/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0002b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:04 compute-0 podman[261966]: 2025-12-05 10:13:04.141492005 +0000 UTC m=+0.243882309 container init 06751a0ac6000985ad0bdd440ed91c28edcffa43efc75308b1110301ea0faf44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:13:04 compute-0 podman[261966]: 2025-12-05 10:13:04.148954719 +0000 UTC m=+0.251344963 container start 06751a0ac6000985ad0bdd440ed91c28edcffa43efc75308b1110301ea0faf44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 10:13:04 compute-0 podman[261966]: 2025-12-05 10:13:04.188563707 +0000 UTC m=+0.290953911 container attach 06751a0ac6000985ad0bdd440ed91c28edcffa43efc75308b1110301ea0faf44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_ritchie, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Dec 05 10:13:04 compute-0 hungry_ritchie[261983]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:13:04 compute-0 hungry_ritchie[261983]: --> All data devices are unavailable
Dec 05 10:13:04 compute-0 systemd[1]: libpod-06751a0ac6000985ad0bdd440ed91c28edcffa43efc75308b1110301ea0faf44.scope: Deactivated successfully.
Dec 05 10:13:04 compute-0 podman[262001]: 2025-12-05 10:13:04.611469089 +0000 UTC m=+0.029133365 container died 06751a0ac6000985ad0bdd440ed91c28edcffa43efc75308b1110301ea0faf44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 05 10:13:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2936beb6cf777d3614409269456b21f8fe5a8283a9fcc28ec3ca13a060a10254-merged.mount: Deactivated successfully.
Dec 05 10:13:04 compute-0 ceph-mon[74418]: pgmap v714: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:13:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:04 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0002b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:04 compute-0 podman[262001]: 2025-12-05 10:13:04.832502766 +0000 UTC m=+0.250167022 container remove 06751a0ac6000985ad0bdd440ed91c28edcffa43efc75308b1110301ea0faf44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_ritchie, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Dec 05 10:13:04 compute-0 systemd[1]: libpod-conmon-06751a0ac6000985ad0bdd440ed91c28edcffa43efc75308b1110301ea0faf44.scope: Deactivated successfully.
Dec 05 10:13:04 compute-0 sudo[261860]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:05 compute-0 sudo[262016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:13:05 compute-0 sudo[262016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:13:05 compute-0 sudo[262016]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:05 compute-0 sudo[262041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:13:05 compute-0 sudo[262041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:13:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:05.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:05.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v715: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:13:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:05 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:13:05] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Dec 05 10:13:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:13:05] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Dec 05 10:13:05 compute-0 podman[262109]: 2025-12-05 10:13:05.680404455 +0000 UTC m=+0.101197285 container create cb2456cfb10625257335c046de2511d5ba70784083f9ebb14bee85562393e479 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:13:05 compute-0 podman[262109]: 2025-12-05 10:13:05.615390076 +0000 UTC m=+0.036182886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:13:05 compute-0 systemd[1]: Started libpod-conmon-cb2456cfb10625257335c046de2511d5ba70784083f9ebb14bee85562393e479.scope.
Dec 05 10:13:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:13:05 compute-0 podman[262109]: 2025-12-05 10:13:05.830194253 +0000 UTC m=+0.250987073 container init cb2456cfb10625257335c046de2511d5ba70784083f9ebb14bee85562393e479 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_tesla, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:13:05 compute-0 podman[262109]: 2025-12-05 10:13:05.844820701 +0000 UTC m=+0.265613521 container start cb2456cfb10625257335c046de2511d5ba70784083f9ebb14bee85562393e479 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_tesla, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:13:05 compute-0 festive_tesla[262126]: 167 167
Dec 05 10:13:05 compute-0 systemd[1]: libpod-cb2456cfb10625257335c046de2511d5ba70784083f9ebb14bee85562393e479.scope: Deactivated successfully.
Dec 05 10:13:05 compute-0 podman[262109]: 2025-12-05 10:13:05.87637357 +0000 UTC m=+0.297166390 container attach cb2456cfb10625257335c046de2511d5ba70784083f9ebb14bee85562393e479 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_tesla, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:13:05 compute-0 podman[262109]: 2025-12-05 10:13:05.876899694 +0000 UTC m=+0.297692524 container died cb2456cfb10625257335c046de2511d5ba70784083f9ebb14bee85562393e479 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 05 10:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad8f4b44df8a969dcf3f2fb3c59ca2f48a33d82304065b58768663c0bf7c9f97-merged.mount: Deactivated successfully.
Dec 05 10:13:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c98000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:06 compute-0 podman[262109]: 2025-12-05 10:13:06.153135623 +0000 UTC m=+0.573928433 container remove cb2456cfb10625257335c046de2511d5ba70784083f9ebb14bee85562393e479 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:13:06 compute-0 systemd[1]: libpod-conmon-cb2456cfb10625257335c046de2511d5ba70784083f9ebb14bee85562393e479.scope: Deactivated successfully.
Dec 05 10:13:06 compute-0 podman[262151]: 2025-12-05 10:13:06.385650782 +0000 UTC m=+0.105080252 container create 20a63fcc0a9b284230bc975099c22ae52f48fa4d2aa8893699dca39e86375c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 05 10:13:06 compute-0 podman[262151]: 2025-12-05 10:13:06.326878102 +0000 UTC m=+0.046307632 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:13:06 compute-0 systemd[1]: Started libpod-conmon-20a63fcc0a9b284230bc975099c22ae52f48fa4d2aa8893699dca39e86375c73.scope.
Dec 05 10:13:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ae789c8be315d642a8904571caf48324a37291b9aaf846212bc16d1e68b35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ae789c8be315d642a8904571caf48324a37291b9aaf846212bc16d1e68b35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ae789c8be315d642a8904571caf48324a37291b9aaf846212bc16d1e68b35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ae789c8be315d642a8904571caf48324a37291b9aaf846212bc16d1e68b35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:06 compute-0 podman[262151]: 2025-12-05 10:13:06.529848877 +0000 UTC m=+0.249278437 container init 20a63fcc0a9b284230bc975099c22ae52f48fa4d2aa8893699dca39e86375c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_roentgen, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:13:06 compute-0 podman[262151]: 2025-12-05 10:13:06.540136557 +0000 UTC m=+0.259566057 container start 20a63fcc0a9b284230bc975099c22ae52f48fa4d2aa8893699dca39e86375c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:13:06 compute-0 podman[262151]: 2025-12-05 10:13:06.57256367 +0000 UTC m=+0.291993180 container attach 20a63fcc0a9b284230bc975099c22ae52f48fa4d2aa8893699dca39e86375c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_roentgen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 10:13:06 compute-0 podman[262170]: 2025-12-05 10:13:06.577102763 +0000 UTC m=+0.109741058 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:13:06 compute-0 podman[262172]: 2025-12-05 10:13:06.583168399 +0000 UTC m=+0.115423703 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 05 10:13:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:06 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:06 compute-0 happy_roentgen[262169]: {
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:     "1": [
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:         {
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             "devices": [
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "/dev/loop3"
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             ],
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             "lv_name": "ceph_lv0",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             "lv_size": "21470642176",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             "name": "ceph_lv0",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             "tags": {
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.cluster_name": "ceph",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.crush_device_class": "",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.encrypted": "0",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.osd_id": "1",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.type": "block",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.vdo": "0",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:                 "ceph.with_tpm": "0"
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             },
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             "type": "block",
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:             "vg_name": "ceph_vg0"
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:         }
Dec 05 10:13:06 compute-0 happy_roentgen[262169]:     ]
Dec 05 10:13:06 compute-0 happy_roentgen[262169]: }
Dec 05 10:13:06 compute-0 systemd[1]: libpod-20a63fcc0a9b284230bc975099c22ae52f48fa4d2aa8893699dca39e86375c73.scope: Deactivated successfully.
Dec 05 10:13:06 compute-0 podman[262151]: 2025-12-05 10:13:06.845700275 +0000 UTC m=+0.565129745 container died 20a63fcc0a9b284230bc975099c22ae52f48fa4d2aa8893699dca39e86375c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 10:13:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d9ae789c8be315d642a8904571caf48324a37291b9aaf846212bc16d1e68b35-merged.mount: Deactivated successfully.
Dec 05 10:13:07 compute-0 podman[262151]: 2025-12-05 10:13:07.055132356 +0000 UTC m=+0.774561826 container remove 20a63fcc0a9b284230bc975099c22ae52f48fa4d2aa8893699dca39e86375c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_roentgen, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 10:13:07 compute-0 systemd[1]: libpod-conmon-20a63fcc0a9b284230bc975099c22ae52f48fa4d2aa8893699dca39e86375c73.scope: Deactivated successfully.
Dec 05 10:13:07 compute-0 ceph-mon[74418]: pgmap v715: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 05 10:13:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2516820585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:07 compute-0 sudo[262041]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:07.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:07.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:07 compute-0 sudo[262229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:13:07 compute-0 sudo[262229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:13:07 compute-0 sudo[262229]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:07 compute-0 sudo[262254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:13:07 compute-0 sudo[262254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:13:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:07.339Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:13:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v716: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:13:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:07 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0002b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:07 compute-0 podman[262321]: 2025-12-05 10:13:07.701040699 +0000 UTC m=+0.064366183 container create a4287cc8dfbf4ebb9fedb0682f6b20adefc9ab1cf1b53031b50e0f31b2970b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sanderson, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:13:07 compute-0 podman[262321]: 2025-12-05 10:13:07.657769371 +0000 UTC m=+0.021094895 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:13:07 compute-0 systemd[1]: Started libpod-conmon-a4287cc8dfbf4ebb9fedb0682f6b20adefc9ab1cf1b53031b50e0f31b2970b73.scope.
Dec 05 10:13:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:13:07 compute-0 podman[262321]: 2025-12-05 10:13:07.84435671 +0000 UTC m=+0.207682234 container init a4287cc8dfbf4ebb9fedb0682f6b20adefc9ab1cf1b53031b50e0f31b2970b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sanderson, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 10:13:07 compute-0 podman[262321]: 2025-12-05 10:13:07.850427955 +0000 UTC m=+0.213753439 container start a4287cc8dfbf4ebb9fedb0682f6b20adefc9ab1cf1b53031b50e0f31b2970b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sanderson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:13:07 compute-0 unruffled_sanderson[262338]: 167 167
Dec 05 10:13:07 compute-0 systemd[1]: libpod-a4287cc8dfbf4ebb9fedb0682f6b20adefc9ab1cf1b53031b50e0f31b2970b73.scope: Deactivated successfully.
Dec 05 10:13:07 compute-0 podman[262321]: 2025-12-05 10:13:07.864408106 +0000 UTC m=+0.227733620 container attach a4287cc8dfbf4ebb9fedb0682f6b20adefc9ab1cf1b53031b50e0f31b2970b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 05 10:13:07 compute-0 podman[262321]: 2025-12-05 10:13:07.865534866 +0000 UTC m=+0.228860380 container died a4287cc8dfbf4ebb9fedb0682f6b20adefc9ab1cf1b53031b50e0f31b2970b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sanderson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 10:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b98b9b004bb449e8c23184f61736fcc2e123d787c8bb909b0d97a78043daedf9-merged.mount: Deactivated successfully.
Dec 05 10:13:08 compute-0 podman[262321]: 2025-12-05 10:13:08.092485984 +0000 UTC m=+0.455811498 container remove a4287cc8dfbf4ebb9fedb0682f6b20adefc9ab1cf1b53031b50e0f31b2970b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 10:13:08 compute-0 systemd[1]: libpod-conmon-a4287cc8dfbf4ebb9fedb0682f6b20adefc9ab1cf1b53031b50e0f31b2970b73.scope: Deactivated successfully.
Dec 05 10:13:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:08 compute-0 ceph-mon[74418]: pgmap v716: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 05 10:13:08 compute-0 podman[262365]: 2025-12-05 10:13:08.380569956 +0000 UTC m=+0.111104096 container create 8d6924796f9ce9e0d12f159af73c8ead625c72de6ba58368436c30fb29cdc2cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_dijkstra, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 10:13:08 compute-0 podman[262365]: 2025-12-05 10:13:08.311452684 +0000 UTC m=+0.041986874 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:13:08 compute-0 systemd[1]: Started libpod-conmon-8d6924796f9ce9e0d12f159af73c8ead625c72de6ba58368436c30fb29cdc2cd.scope.
Dec 05 10:13:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada6df855575c8e43123340f7bd93093a9d35347bef4a72c68a85b89fbfafc45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada6df855575c8e43123340f7bd93093a9d35347bef4a72c68a85b89fbfafc45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada6df855575c8e43123340f7bd93093a9d35347bef4a72c68a85b89fbfafc45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada6df855575c8e43123340f7bd93093a9d35347bef4a72c68a85b89fbfafc45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:08 compute-0 podman[262365]: 2025-12-05 10:13:08.538378491 +0000 UTC m=+0.268912611 container init 8d6924796f9ce9e0d12f159af73c8ead625c72de6ba58368436c30fb29cdc2cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:13:08 compute-0 podman[262365]: 2025-12-05 10:13:08.545939387 +0000 UTC m=+0.276473487 container start 8d6924796f9ce9e0d12f159af73c8ead625c72de6ba58368436c30fb29cdc2cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:13:08 compute-0 podman[262365]: 2025-12-05 10:13:08.584655481 +0000 UTC m=+0.315189611 container attach 8d6924796f9ce9e0d12f159af73c8ead625c72de6ba58368436c30fb29cdc2cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_dijkstra, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:13:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:08 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c980016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:09.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:09.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:09 compute-0 lvm[262456]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:13:09 compute-0 lvm[262456]: VG ceph_vg0 finished
Dec 05 10:13:09 compute-0 great_dijkstra[262382]: {}
Dec 05 10:13:09 compute-0 systemd[1]: libpod-8d6924796f9ce9e0d12f159af73c8ead625c72de6ba58368436c30fb29cdc2cd.scope: Deactivated successfully.
Dec 05 10:13:09 compute-0 systemd[1]: libpod-8d6924796f9ce9e0d12f159af73c8ead625c72de6ba58368436c30fb29cdc2cd.scope: Consumed 1.227s CPU time.
Dec 05 10:13:09 compute-0 podman[262460]: 2025-12-05 10:13:09.452837524 +0000 UTC m=+0.029471364 container died 8d6924796f9ce9e0d12f159af73c8ead625c72de6ba58368436c30fb29cdc2cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_dijkstra, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 10:13:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v717: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 7 op/s
Dec 05 10:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ada6df855575c8e43123340f7bd93093a9d35347bef4a72c68a85b89fbfafc45-merged.mount: Deactivated successfully.
Dec 05 10:13:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:09 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40030a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:09 compute-0 podman[262460]: 2025-12-05 10:13:09.729095093 +0000 UTC m=+0.305728913 container remove 8d6924796f9ce9e0d12f159af73c8ead625c72de6ba58368436c30fb29cdc2cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_dijkstra, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:13:09 compute-0 systemd[1]: libpod-conmon-8d6924796f9ce9e0d12f159af73c8ead625c72de6ba58368436c30fb29cdc2cd.scope: Deactivated successfully.
Dec 05 10:13:09 compute-0 sudo[262254]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:13:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:13:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:13:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:13:10 compute-0 sudo[262476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:13:10 compute-0 sudo[262476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:13:10 compute-0 sudo[262476]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0002b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:10 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:10 compute-0 ceph-mon[74418]: pgmap v717: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 7 op/s
Dec 05 10:13:10 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:13:10 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:13:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec 05 10:13:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec 05 10:13:10 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec 05 10:13:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:11.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:11 compute-0 podman[262503]: 2025-12-05 10:13:11.466416095 +0000 UTC m=+0.116783501 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:13:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v719: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Dec 05 10:13:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c980016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:11 compute-0 sudo[262530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:13:11 compute-0 sudo[262530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:13:11 compute-0 sudo[262530]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:11 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:13:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec 05 10:13:12 compute-0 ceph-mon[74418]: osdmap e137: 3 total, 3 up, 3 in
Dec 05 10:13:12 compute-0 ceph-mon[74418]: pgmap v719: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Dec 05 10:13:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec 05 10:13:12 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec 05 10:13:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40030c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:13:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:13:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:12 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0002b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:13 compute-0 ceph-mon[74418]: osdmap e138: 3 total, 3 up, 3 in
Dec 05 10:13:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:13:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:13.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:13.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v721: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Dec 05 10:13:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:13 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:13.631Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:13:14 compute-0 ceph-mon[74418]: pgmap v721: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Dec 05 10:13:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c980016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca40030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:13:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:14 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:13:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:13:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:15.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:15.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:15 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/319641522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:13:15 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2865374272' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:13:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v722: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 53 op/s
Dec 05 10:13:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:15 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0005010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:13:15] "GET /metrics HTTP/1.1" 200 48488 "" "Prometheus/2.51.0"
Dec 05 10:13:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:13:15] "GET /metrics HTTP/1.1" 200 48488 "" "Prometheus/2.51.0"
Dec 05 10:13:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:16 compute-0 ceph-mon[74418]: pgmap v722: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 53 op/s
Dec 05 10:13:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:16 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c98002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:17.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:17.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:17.340Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:13:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v723: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Dec 05 10:13:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:17 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:13:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0005010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec 05 10:13:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec 05 10:13:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec 05 10:13:18 compute-0 ceph-mon[74418]: pgmap v723: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Dec 05 10:13:18 compute-0 ceph-mon[74418]: osdmap e139: 3 total, 3 up, 3 in
Dec 05 10:13:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:18 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:13:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:19.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:13:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:19.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v725: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 MiB/s wr, 60 op/s
Dec 05 10:13:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:19 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c98002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:13:20.571 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:13:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:13:20.571 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:13:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:13:20.571 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:13:20 compute-0 ceph-mon[74418]: pgmap v725: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 MiB/s wr, 60 op/s
Dec 05 10:13:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:20 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0005010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:21.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:21.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v726: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.3 MiB/s wr, 50 op/s
Dec 05 10:13:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:21 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c98002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:22 compute-0 ceph-mon[74418]: pgmap v726: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.3 MiB/s wr, 50 op/s
Dec 05 10:13:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:22 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ca4003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.147822) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929603147999, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1511, "num_deletes": 255, "total_data_size": 2815912, "memory_usage": 2863280, "flush_reason": "Manual Compaction"}
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec 05 10:13:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:23.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929603170298, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2754815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22500, "largest_seqno": 24009, "table_properties": {"data_size": 2747649, "index_size": 4173, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14676, "raw_average_key_size": 19, "raw_value_size": 2733203, "raw_average_value_size": 3654, "num_data_blocks": 179, "num_entries": 748, "num_filter_entries": 748, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929471, "oldest_key_time": 1764929471, "file_creation_time": 1764929603, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 22527 microseconds, and 10774 cpu microseconds.
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.170352) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2754815 bytes OK
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.170384) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.179042) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.179100) EVENT_LOG_v1 {"time_micros": 1764929603179087, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.179134) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2809376, prev total WAL file size 2809376, number of live WAL files 2.
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.180866) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2690KB)], [50(10MB)]
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929603180974, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 14125946, "oldest_snapshot_seqno": -1}
Dec 05 10:13:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:23.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5533 keys, 13924169 bytes, temperature: kUnknown
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929603327912, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13924169, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13886448, "index_size": 22763, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 141549, "raw_average_key_size": 25, "raw_value_size": 13785331, "raw_average_value_size": 2491, "num_data_blocks": 926, "num_entries": 5533, "num_filter_entries": 5533, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764929603, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.328174) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13924169 bytes
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.344429) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.1 rd, 94.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 10.8 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(10.2) write-amplify(5.1) OK, records in: 6065, records dropped: 532 output_compression: NoCompression
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.344451) EVENT_LOG_v1 {"time_micros": 1764929603344441, "job": 26, "event": "compaction_finished", "compaction_time_micros": 147025, "compaction_time_cpu_micros": 33676, "output_level": 6, "num_output_files": 1, "total_output_size": 13924169, "num_input_records": 6065, "num_output_records": 5533, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929603345620, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929603349869, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.180729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.349971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.349977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.349980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.349983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:13:23 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:13:23.349987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:13:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v727: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Dec 05 10:13:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:23.632Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:13:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:23 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0005010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:24 compute-0 ceph-mon[74418]: pgmap v727: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Dec 05 10:13:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/101324 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:13:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:24 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c98003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:25.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:25.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v728: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 91 op/s
Dec 05 10:13:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:25 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c98003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:13:25] "GET /metrics HTTP/1.1" 200 48488 "" "Prometheus/2.51.0"
Dec 05 10:13:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:13:25] "GET /metrics HTTP/1.1" 200 48488 "" "Prometheus/2.51.0"
Dec 05 10:13:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:13:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.7 total, 600.0 interval
                                           Cumulative writes: 5229 writes, 23K keys, 5229 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 5229 writes, 5229 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1484 writes, 6668 keys, 1484 commit groups, 1.0 writes per commit group, ingest: 11.43 MB, 0.02 MB/s
                                           Interval WAL: 1484 writes, 1484 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     36.0      1.00              0.21        13    0.077       0      0       0.0       0.0
                                             L6      1/0   13.28 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   4.2     65.4     56.9      2.65              0.60        12    0.221     62K   6197       0.0       0.0
                                            Sum      1/0   13.28 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.2     47.5     51.2      3.64              0.81        25    0.146     62K   6197       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.3     58.0     58.5      1.47              0.30        12    0.122     34K   3014       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     65.4     56.9      2.65              0.60        12    0.221     62K   6197       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     67.3      0.53              0.21        12    0.044       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.46              0.00         1    0.463       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.7 total, 600.0 interval
                                           Flush(GB): cumulative 0.035, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.18 GB write, 0.10 MB/s write, 0.17 GB read, 0.10 MB/s read, 3.6 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5585d4f19350#2 capacity: 304.00 MB usage: 11.87 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000195 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(626,11.36 MB,3.7375%) FilterBlock(26,187.11 KB,0.0601066%) IndexBlock(26,332.75 KB,0.106892%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
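The DB Stats block above pairs raw counters with derived rates. A minimal sketch that re-derives the ingest rates and the writes-per-sync ratio from the cumulative and interval figures quoted in the dump, assuming RocksDB's MB/GB in these stats are binary units (2**20 and 2**30):

# Re-derive the rates printed in the RocksDB "DB Stats" block above.
# Assumption: MB/GB in these dumps are binary units (2**20 / 2**30).
uptime_s, interval_s = 1800.7, 600.0

cumulative_ingest_gb = 0.04         # "ingest: 0.04 GB"
interval_ingest_mb = 11.43          # "ingest: 11.43 MB"
wal_writes, wal_syncs = 5229, 5229  # "Cumulative WAL: 5229 writes, 5229 syncs"

print(f"cumulative ingest: {cumulative_ingest_gb * 1024 / uptime_s:.2f} MB/s")  # ~0.02
print(f"interval ingest:   {interval_ingest_mb / interval_s:.2f} MB/s")         # ~0.02
print(f"writes per sync:   {wal_writes / wal_syncs:.2f}")                       # 1.00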
Dec 05 10:13:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7cd0005010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:26 compute-0 ceph-mon[74418]: pgmap v728: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 91 op/s
Dec 05 10:13:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:26 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0047a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:26 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 10:13:26 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 10:13:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:27.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:27.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:27.342Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:13:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:27.342Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:13:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:27.342Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
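The Alertmanager entries above show both ceph-dashboard webhook receivers timing out while posting to the Prometheus receiver endpoints on compute-1 and compute-2. A minimal probe of the same URLs helps tell a network timeout apart from an HTTP-level failure; only the endpoints are taken from the log, while the JSON body and the 5-second timeout are illustrative assumptions:

#!/usr/bin/env python3
"""Probe the ceph-dashboard Prometheus receiver endpoints that Alertmanager
reports as unreachable (URLs from the log; payload and timeout are only
illustrative, not the real Alertmanager webhook body)."""
import json
import urllib.error
import urllib.request

ENDPOINTS = [
    "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
    "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
]

# Minimal Alertmanager-webhook-shaped body; enough to see whether the
# endpoint answers at all.
payload = json.dumps({"alerts": [{"labels": {"alertname": "probe"}}]}).encode()

for url in ENDPOINTS:
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(f"{url}: HTTP {resp.status}")
    except urllib.error.HTTPError as exc:   # reachable, but HTTP-level error
        print(f"{url}: HTTP {exc.code}")
    except OSError as exc:                  # timeout / connection refused / DNS
        print(f"{url}: {exc}")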
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v729: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 91 op/s
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:13:27
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'backups', 'default.rgw.meta', 'images', '.rgw.root', 'volumes', 'default.rgw.control']
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:13:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:13:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:13:27 compute-0 kernel: ganesha.nfsd[261725]: segfault at 50 ip 00007f7d8a40632e sp 00007f7d5dffa210 error 4 in libntirpc.so.5.8[7f7d8a3eb000+2c000] likely on CPU 2 (core 0, socket 2)
Dec 05 10:13:27 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
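The kernel line above gives the faulting instruction pointer and the text mapping of libntirpc.so.5.8; subtracting the mapping start yields the offset inside the library that a symbolizer needs. A quick check using those two values (resolving the offset against matching libntirpc debuginfo, for example with addr2line, is left out):

# Offset of the faulting instruction inside libntirpc.so.5.8, taken from the
# kernel segfault line above:
#   "ip 00007f7d8a40632e ... in libntirpc.so.5.8[7f7d8a3eb000+2c000]"
ip = 0x00007f7d8a40632e      # faulting instruction pointer
text_start = 0x7f7d8a3eb000  # start of the mapped text segment
text_len = 0x2c000           # length of the mapping

offset = ip - text_start
assert 0 <= offset < text_len
print(hex(offset))  # 0x1b32e; feed to addr2line with the matching debuginfo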
Dec 05 10:13:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[224724]: 05/12/2025 10:13:27 : epoch 6932ae1e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7c9c0047a0 fd 48 proxy ignored for local
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:13:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:13:27 compute-0 systemd[1]: Started Process Core Dump (PID 262572/UID 0).
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
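Each pg_autoscaler line above multiplies the pool's share of raw capacity by its bias and by the cluster's PG budget. The sketch below reproduces the logged "pg target" values under the assumption of the default 100 target PGs per OSD and 3 OSDs behind root -1 (a budget of 300); the later quantization to a power of two with per-pool minimums is not modelled:

# Reproduce the raw "pg target" values from the pg_autoscaler lines above.
# Assumption: default mon_target_pg_per_osd (100) and 3 OSDs under root -1,
# i.e. a PG budget of 300; power-of-two quantization is not modelled.
PG_BUDGET = 100 * 3

pools = {                      # capacity_ratio, bias (copied from the log)
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (0.00034841348814872695, 1.0),
    "images":             (0.000665858301588852, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}

for name, (ratio, bias) in pools.items():
    print(f"{name:>20}: pg target {ratio * bias * PG_BUDGET:.6f}")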
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:13:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:13:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:28 compute-0 ceph-mon[74418]: pgmap v729: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 91 op/s
Dec 05 10:13:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:29.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:29.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v730: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 80 op/s
Dec 05 10:13:29 compute-0 systemd-coredump[262573]: Process 224735 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 90:
                                                    #0  0x00007f7d8a40632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 05 10:13:29 compute-0 systemd[1]: systemd-coredump@5-262572-0.service: Deactivated successfully.
Dec 05 10:13:29 compute-0 systemd[1]: systemd-coredump@5-262572-0.service: Consumed 1.931s CPU time.
Dec 05 10:13:29 compute-0 podman[262580]: 2025-12-05 10:13:29.789929244 +0000 UTC m=+0.043312860 container died 35030d0766c4ac8c848462d73c3403b87a7518f0d5fc5abd6496a97acf4318c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 10:13:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-dac215173f0155d4f8368345719bb625565f388c4755fcbc8d90946437730257-merged.mount: Deactivated successfully.
Dec 05 10:13:30 compute-0 podman[262580]: 2025-12-05 10:13:30.031680835 +0000 UTC m=+0.285064451 container remove 35030d0766c4ac8c848462d73c3403b87a7518f0d5fc5abd6496a97acf4318c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Dec 05 10:13:30 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Main process exited, code=exited, status=139/n/a
Dec 05 10:13:30 compute-0 ceph-mon[74418]: pgmap v730: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 80 op/s
Dec 05 10:13:30 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Failed with result 'exit-code'.
Dec 05 10:13:30 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 3.005s CPU time.
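The status=139 in the systemd message above is the conventional 128 + signal-number encoding that container runtimes propagate when the payload dies on a signal, which matches the ganesha.nfsd segfault and core dump recorded just before. A one-line confirmation:

import signal
status = 139  # "Main process exited, code=exited, status=139"
print(status - 128, signal.Signals(status - 128).name)  # 11 SIGSEGV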
Dec 05 10:13:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:31.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:31.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v731: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 64 op/s
Dec 05 10:13:31 compute-0 sudo[262625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:13:31 compute-0 sudo[262625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:13:31 compute-0 sudo[262625]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:32 compute-0 ceph-mon[74418]: pgmap v731: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 64 op/s
Dec 05 10:13:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:33.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:33.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v732: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 64 op/s
Dec 05 10:13:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:33.634Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:13:34 compute-0 ceph-mon[74418]: pgmap v732: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 64 op/s
Dec 05 10:13:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:35.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:35.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v733: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Dec 05 10:13:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:13:35] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:13:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/101335 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
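HAProxy marks backend nfs.cephfs.2 DOWN above on a failed Layer-4 check while its ganesha container restarts; a Layer-4 check is simply a TCP connect. A sketch of the same kind of check follows; the backend addresses and the NFS port 2049 are assumptions, since the log names the backends but not where they listen:

# A layer-4 health check like the one HAProxy reports is just a TCP connect.
# Hosts and port 2049 are assumed for illustration only.
import socket

BACKENDS = {
    "nfs.cephfs.0": ("compute-0.ctlplane.example.com", 2049),
    "nfs.cephfs.1": ("compute-1.ctlplane.example.com", 2049),
    "nfs.cephfs.2": ("compute-2.ctlplane.example.com", 2049),
}

for name, addr in BACKENDS.items():
    try:
        with socket.create_connection(addr, timeout=2):
            print(f"{name}: UP (TCP connect to {addr[0]}:{addr[1]} succeeded)")
    except OSError as exc:  # e.g. ConnectionRefusedError while ganesha restarts
        print(f"{name}: DOWN ({exc})")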
Dec 05 10:13:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:13:35] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:13:36 compute-0 ceph-mon[74418]: pgmap v733: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Dec 05 10:13:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:37.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:37.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:37.343Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:13:37 compute-0 podman[262656]: 2025-12-05 10:13:37.406757339 +0000 UTC m=+0.069282266 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:13:37 compute-0 podman[262657]: 2025-12-05 10:13:37.441145416 +0000 UTC m=+0.097458694 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 05 10:13:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v734: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 05 10:13:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:38 compute-0 ceph-mon[74418]: pgmap v734: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 05 10:13:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:39.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:39.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v735: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:13:40 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Scheduled restart job, restart counter is at 6.
Dec 05 10:13:40 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 10:13:40 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 3.005s CPU time.
Dec 05 10:13:40 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 10:13:40 compute-0 podman[262747]: 2025-12-05 10:13:40.677841721 +0000 UTC m=+0.111241769 container create f23716e3400f7b56d94742c9cddefa386baa016571b977e3b310f244f93705bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:13:40 compute-0 podman[262747]: 2025-12-05 10:13:40.59510999 +0000 UTC m=+0.028510048 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d215c261b34711275969e3ef0acfb602abd6966547f48400d06888bbe5422d37/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d215c261b34711275969e3ef0acfb602abd6966547f48400d06888bbe5422d37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d215c261b34711275969e3ef0acfb602abd6966547f48400d06888bbe5422d37/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d215c261b34711275969e3ef0acfb602abd6966547f48400d06888bbe5422d37/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hocvro-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:40 compute-0 ceph-mon[74418]: pgmap v735: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:13:40 compute-0 podman[262747]: 2025-12-05 10:13:40.738038331 +0000 UTC m=+0.171438399 container init f23716e3400f7b56d94742c9cddefa386baa016571b977e3b310f244f93705bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 10:13:40 compute-0 podman[262747]: 2025-12-05 10:13:40.743304784 +0000 UTC m=+0.176704812 container start f23716e3400f7b56d94742c9cddefa386baa016571b977e3b310f244f93705bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 10:13:40 compute-0 bash[262747]: f23716e3400f7b56d94742c9cddefa386baa016571b977e3b310f244f93705bc
Dec 05 10:13:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:40 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 05 10:13:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:40 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 05 10:13:40 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 10:13:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:40 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 05 10:13:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:40 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 05 10:13:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:40 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 05 10:13:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:40 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 05 10:13:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:40 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 05 10:13:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:40 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:13:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:41.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:41.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v736: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:13:42 compute-0 ceph-mon[74418]: pgmap v736: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:13:42 compute-0 podman[262805]: 2025-12-05 10:13:42.414679379 +0000 UTC m=+0.085522958 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:13:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:13:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:13:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:13:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:43.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:13:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:43.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:13:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v737: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:13:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:43.637Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:13:44 compute-0 ceph-mon[74418]: pgmap v737: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:13:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:45.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:45.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v738: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 05 10:13:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:13:45] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Dec 05 10:13:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:13:45] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Dec 05 10:13:46 compute-0 ceph-mon[74418]: pgmap v738: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 05 10:13:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:46 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:13:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:46 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:13:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:47.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:47.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:47.345Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:13:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:47.345Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:13:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v739: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 12 KiB/s wr, 1 op/s
Dec 05 10:13:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:48 compute-0 ceph-mon[74418]: pgmap v739: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 12 KiB/s wr, 1 op/s
Dec 05 10:13:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:49.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:49.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v740: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s rd, 16 KiB/s wr, 4 op/s
Dec 05 10:13:50 compute-0 ceph-mon[74418]: pgmap v740: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s rd, 16 KiB/s wr, 4 op/s
Dec 05 10:13:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:51.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:51.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v741: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 4.2 KiB/s wr, 3 op/s
Dec 05 10:13:51 compute-0 sudo[262841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:13:51 compute-0 sudo[262841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:13:51 compute-0 sudo[262841]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:52 compute-0 ceph-mon[74418]: pgmap v741: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 4.2 KiB/s wr, 3 op/s
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 05 10:13:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 05 10:13:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:13:53 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2825985224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:53.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:13:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:53.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:13:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:53.639Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:13:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:53.639Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:13:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:53 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v742: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 4.2 KiB/s wr, 3 op/s
Dec 05 10:13:53 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2825985224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:54 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1814001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:54 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:54 compute-0 ceph-mon[74418]: pgmap v742: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 4.2 KiB/s wr, 3 op/s
Dec 05 10:13:55 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:13:55.010 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:13:55 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:13:55.012 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:13:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:55.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:55.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:13:55] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Dec 05 10:13:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:13:55] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Dec 05 10:13:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/101355 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 05 10:13:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:55 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v743: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 5.3 KiB/s wr, 4 op/s
Dec 05 10:13:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3874269426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:56 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:56 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f18140025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:56 compute-0 ceph-mon[74418]: pgmap v743: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 5.3 KiB/s wr, 4 op/s
Dec 05 10:13:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/4282086156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:57.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:57 compute-0 nova_compute[257087]: 2025-12-05 10:13:57.251 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:57 compute-0 nova_compute[257087]: 2025-12-05 10:13:57.252 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:57.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:57 compute-0 nova_compute[257087]: 2025-12-05 10:13:57.268 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:57 compute-0 nova_compute[257087]: 2025-12-05 10:13:57.268 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:13:57 compute-0 nova_compute[257087]: 2025-12-05 10:13:57.268 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:13:57 compute-0 nova_compute[257087]: 2025-12-05 10:13:57.279 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:13:57 compute-0 nova_compute[257087]: 2025-12-05 10:13:57.279 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:57 compute-0 nova_compute[257087]: 2025-12-05 10:13:57.279 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:13:57.346Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:13:57 compute-0 nova_compute[257087]: 2025-12-05 10:13:57.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:13:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:13:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:13:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:13:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:13:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:13:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:13:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:13:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:57 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v744: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 5.1 KiB/s wr, 3 op/s
Dec 05 10:13:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2879040564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3319280645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:13:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3319280645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:13:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:13:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3554238025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:58 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:13:58.016 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:13:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:58 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:58 compute-0 nova_compute[257087]: 2025-12-05 10:13:58.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:58 compute-0 nova_compute[257087]: 2025-12-05 10:13:58.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:58 compute-0 nova_compute[257087]: 2025-12-05 10:13:58.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:58 compute-0 nova_compute[257087]: 2025-12-05 10:13:58.613 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:13:58 compute-0 nova_compute[257087]: 2025-12-05 10:13:58.613 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:13:58 compute-0 nova_compute[257087]: 2025-12-05 10:13:58.614 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:13:58 compute-0 nova_compute[257087]: 2025-12-05 10:13:58.615 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:13:58 compute-0 nova_compute[257087]: 2025-12-05 10:13:58.616 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:13:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:58 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:13:59 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3504017711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:59 compute-0 ceph-mon[74418]: pgmap v744: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 5.1 KiB/s wr, 3 op/s
Dec 05 10:13:59 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/934986874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:59 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1300403048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:13:59 compute-0 nova_compute[257087]: 2025-12-05 10:13:59.162 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:13:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:13:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:13:59.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:13:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:13:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:13:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:13:59.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:13:59 compute-0 nova_compute[257087]: 2025-12-05 10:13:59.397 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:13:59 compute-0 nova_compute[257087]: 2025-12-05 10:13:59.398 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4890MB free_disk=59.94270324707031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:13:59 compute-0 nova_compute[257087]: 2025-12-05 10:13:59.398 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:13:59 compute-0 nova_compute[257087]: 2025-12-05 10:13:59.399 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:13:59 compute-0 nova_compute[257087]: 2025-12-05 10:13:59.465 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:13:59 compute-0 nova_compute[257087]: 2025-12-05 10:13:59.465 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:13:59 compute-0 nova_compute[257087]: 2025-12-05 10:13:59.490 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:13:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:13:59 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f18140025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:13:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v745: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 05 10:13:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:13:59 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592546199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:13:59 compute-0 nova_compute[257087]: 2025-12-05 10:13:59.967 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:13:59 compute-0 nova_compute[257087]: 2025-12-05 10:13:59.978 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:14:00 compute-0 nova_compute[257087]: 2025-12-05 10:13:59.999 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:14:00 compute-0 nova_compute[257087]: 2025-12-05 10:14:00.004 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:14:00 compute-0 nova_compute[257087]: 2025-12-05 10:14:00.005 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:14:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:00 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1920220146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:14:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3504017711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:14:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/665993859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:14:00 compute-0 ceph-mon[74418]: pgmap v745: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 05 10:14:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3592546199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:14:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:00 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:01 compute-0 nova_compute[257087]: 2025-12-05 10:14:01.005 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:01 compute-0 nova_compute[257087]: 2025-12-05 10:14:01.005 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:14:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:01.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:01.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:01 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v746: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 05 10:14:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:02 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f18140025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:02 compute-0 ceph-mon[74418]: pgmap v746: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 05 10:14:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:02 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:03.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:03.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:03.640Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:14:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:03 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v747: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 05 10:14:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:04 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:04 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f18140025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:04 compute-0 ceph-mon[74418]: pgmap v747: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 05 10:14:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:05.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:05.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:14:05] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Dec 05 10:14:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:14:05] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Dec 05 10:14:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:05 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v748: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 66 op/s
Dec 05 10:14:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:06 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:06 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c0096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:07 compute-0 ceph-mon[74418]: pgmap v748: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 66 op/s
Dec 05 10:14:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:14:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:07.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:14:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:07.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:07.347Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:14:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:07 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v749: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 65 op/s
Dec 05 10:14:08 compute-0 ceph-mon[74418]: pgmap v749: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 65 op/s
Dec 05 10:14:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:08 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f18140025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:08 compute-0 podman[262942]: 2025-12-05 10:14:08.434398764 +0000 UTC m=+0.077194782 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 10:14:08 compute-0 podman[262943]: 2025-12-05 10:14:08.440546621 +0000 UTC m=+0.083342149 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Dec 05 10:14:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:08 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:09.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:09.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:09 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c0096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v750: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 05 10:14:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:10 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:10 compute-0 sudo[262978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:14:10 compute-0 sudo[262978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:10 compute-0 sudo[262978]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:10 compute-0 sudo[263003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 10:14:10 compute-0 sudo[263003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:10 compute-0 ceph-mon[74418]: pgmap v750: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 05 10:14:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:10 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f18140025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:11 compute-0 podman[263102]: 2025-12-05 10:14:11.080270407 +0000 UTC m=+0.104450064 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:14:11 compute-0 podman[263102]: 2025-12-05 10:14:11.174364568 +0000 UTC m=+0.198544195 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 05 10:14:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:11.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:11.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:11 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v751: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 05 10:14:11 compute-0 podman[263220]: 2025-12-05 10:14:11.75228271 +0000 UTC m=+0.178086909 container exec 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:14:11 compute-0 podman[263220]: 2025-12-05 10:14:11.76004957 +0000 UTC m=+0.185853759 container exec_died 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:14:11 compute-0 sudo[263279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:14:11 compute-0 sudo[263279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:11 compute-0 sudo[263279]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:12 compute-0 podman[263338]: 2025-12-05 10:14:12.103731326 +0000 UTC m=+0.051380569 container exec f23716e3400f7b56d94742c9cddefa386baa016571b977e3b310f244f93705bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 10:14:12 compute-0 podman[263338]: 2025-12-05 10:14:12.115560388 +0000 UTC m=+0.063209611 container exec_died f23716e3400f7b56d94742c9cddefa386baa016571b977e3b310f244f93705bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 05 10:14:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:12 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c0096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:12 compute-0 podman[263405]: 2025-12-05 10:14:12.305876018 +0000 UTC m=+0.050427203 container exec d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 10:14:12 compute-0 podman[263405]: 2025-12-05 10:14:12.318615426 +0000 UTC m=+0.063166591 container exec_died d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 10:14:12 compute-0 podman[263474]: 2025-12-05 10:14:12.560649444 +0000 UTC m=+0.078794406 container exec f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 10:14:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:14:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:14:12 compute-0 podman[263474]: 2025-12-05 10:14:12.582810377 +0000 UTC m=+0.100955329 container exec_died f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public)
Dec 05 10:14:12 compute-0 podman[263508]: 2025-12-05 10:14:12.764676307 +0000 UTC m=+0.105342499 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:14:12 compute-0 ceph-mon[74418]: pgmap v751: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 05 10:14:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:14:12 compute-0 podman[263564]: 2025-12-05 10:14:12.821126744 +0000 UTC m=+0.054918595 container exec a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:14:12 compute-0 podman[263564]: 2025-12-05 10:14:12.856904898 +0000 UTC m=+0.090696729 container exec_died a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:14:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:12 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c0096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:13 compute-0 podman[263641]: 2025-12-05 10:14:13.076923948 +0000 UTC m=+0.052507361 container exec 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 10:14:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:13.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:14:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:13.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:14:13 compute-0 podman[263641]: 2025-12-05 10:14:13.320132557 +0000 UTC m=+0.295715960 container exec_died 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 10:14:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:13.642Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:14:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:13 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c0096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v752: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 05 10:14:13 compute-0 podman[263755]: 2025-12-05 10:14:13.991147193 +0000 UTC m=+0.365993504 container exec 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:14:14 compute-0 podman[263755]: 2025-12-05 10:14:14.060333287 +0000 UTC m=+0.435179608 container exec_died 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:14:14 compute-0 sudo[263003]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:14:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:14:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:14:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:14:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:14 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:14 compute-0 sudo[263799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:14:14 compute-0 sudo[263799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:14 compute-0 sudo[263799]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:14 compute-0 sudo[263825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:14:14 compute-0 sudo[263825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:14 compute-0 sudo[263825]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:14:14 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:14:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:14:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:14:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:14:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:14:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:14:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:14:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:14 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:14:14 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:14:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:14:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:14:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:14:14 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:14:14 compute-0 sudo[263882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:14:14 compute-0 sudo[263882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:14 compute-0 sudo[263882]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:14 compute-0 ceph-mon[74418]: pgmap v752: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 05 10:14:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:14:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:14:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:14:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:14:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:14:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:14:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:14:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:14:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:14:15 compute-0 sudo[263907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:14:15 compute-0 sudo[263907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:15 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 05 10:14:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:15.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:15 compute-0 podman[263972]: 2025-12-05 10:14:15.429004522 +0000 UTC m=+0.028545837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:14:15 compute-0 podman[263972]: 2025-12-05 10:14:15.63940136 +0000 UTC m=+0.238942625 container create ef1bd54f37441c02eb0c43abd40487339d1a6b1b33e3369c314320b46f37b8c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 10:14:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:14:15] "GET /metrics HTTP/1.1" 200 48544 "" "Prometheus/2.51.0"
Dec 05 10:14:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:14:15] "GET /metrics HTTP/1.1" 200 48544 "" "Prometheus/2.51.0"
Dec 05 10:14:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:15 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c0096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:15 compute-0 systemd[1]: Started libpod-conmon-ef1bd54f37441c02eb0c43abd40487339d1a6b1b33e3369c314320b46f37b8c1.scope.
Dec 05 10:14:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v753: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 73 op/s
Dec 05 10:14:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:14:15 compute-0 podman[263972]: 2025-12-05 10:14:15.734379535 +0000 UTC m=+0.333920840 container init ef1bd54f37441c02eb0c43abd40487339d1a6b1b33e3369c314320b46f37b8c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_carson, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 10:14:15 compute-0 podman[263972]: 2025-12-05 10:14:15.742405964 +0000 UTC m=+0.341947239 container start ef1bd54f37441c02eb0c43abd40487339d1a6b1b33e3369c314320b46f37b8c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_carson, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 10:14:15 compute-0 podman[263972]: 2025-12-05 10:14:15.745273592 +0000 UTC m=+0.344814877 container attach ef1bd54f37441c02eb0c43abd40487339d1a6b1b33e3369c314320b46f37b8c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 10:14:15 compute-0 quizzical_carson[263988]: 167 167
Dec 05 10:14:15 compute-0 systemd[1]: libpod-ef1bd54f37441c02eb0c43abd40487339d1a6b1b33e3369c314320b46f37b8c1.scope: Deactivated successfully.
Dec 05 10:14:15 compute-0 podman[263972]: 2025-12-05 10:14:15.750095983 +0000 UTC m=+0.349637258 container died ef1bd54f37441c02eb0c43abd40487339d1a6b1b33e3369c314320b46f37b8c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_carson, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-48bd9cd01061f1938f3530d5c69fbbe1d7c8f2ebed1dcff0b47ae62780387106-merged.mount: Deactivated successfully.
Dec 05 10:14:15 compute-0 podman[263972]: 2025-12-05 10:14:15.788014075 +0000 UTC m=+0.387555350 container remove ef1bd54f37441c02eb0c43abd40487339d1a6b1b33e3369c314320b46f37b8c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:14:15 compute-0 systemd[1]: libpod-conmon-ef1bd54f37441c02eb0c43abd40487339d1a6b1b33e3369c314320b46f37b8c1.scope: Deactivated successfully.
Dec 05 10:14:16 compute-0 podman[264012]: 2025-12-05 10:14:15.93109276 +0000 UTC m=+0.027908951 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:14:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:16 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f18140025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:16 compute-0 podman[264012]: 2025-12-05 10:14:16.445218985 +0000 UTC m=+0.542035166 container create e3204173ac82aefe28fb2a1cc45270d895ab3d039a7307acd650c9988ee5c3c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_mayer, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:14:16 compute-0 systemd[1]: Started libpod-conmon-e3204173ac82aefe28fb2a1cc45270d895ab3d039a7307acd650c9988ee5c3c7.scope.
Dec 05 10:14:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcf35ea5f87dd861c52d1fbcad7bf71da2a40d8029ef6ab8da33ce1d10d8759/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcf35ea5f87dd861c52d1fbcad7bf71da2a40d8029ef6ab8da33ce1d10d8759/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcf35ea5f87dd861c52d1fbcad7bf71da2a40d8029ef6ab8da33ce1d10d8759/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcf35ea5f87dd861c52d1fbcad7bf71da2a40d8029ef6ab8da33ce1d10d8759/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcf35ea5f87dd861c52d1fbcad7bf71da2a40d8029ef6ab8da33ce1d10d8759/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:16 compute-0 podman[264012]: 2025-12-05 10:14:16.540641462 +0000 UTC m=+0.637457643 container init e3204173ac82aefe28fb2a1cc45270d895ab3d039a7307acd650c9988ee5c3c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 10:14:16 compute-0 podman[264012]: 2025-12-05 10:14:16.549437132 +0000 UTC m=+0.646253303 container start e3204173ac82aefe28fb2a1cc45270d895ab3d039a7307acd650c9988ee5c3c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_mayer, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:14:16 compute-0 podman[264012]: 2025-12-05 10:14:16.552440313 +0000 UTC m=+0.649256484 container attach e3204173ac82aefe28fb2a1cc45270d895ab3d039a7307acd650c9988ee5c3c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 10:14:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:16 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f18140025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:16 compute-0 unruffled_mayer[264030]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:14:16 compute-0 unruffled_mayer[264030]: --> All data devices are unavailable
Dec 05 10:14:16 compute-0 systemd[1]: libpod-e3204173ac82aefe28fb2a1cc45270d895ab3d039a7307acd650c9988ee5c3c7.scope: Deactivated successfully.
Dec 05 10:14:16 compute-0 podman[264045]: 2025-12-05 10:14:16.976935519 +0000 UTC m=+0.043512406 container died e3204173ac82aefe28fb2a1cc45270d895ab3d039a7307acd650c9988ee5c3c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_mayer, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfcf35ea5f87dd861c52d1fbcad7bf71da2a40d8029ef6ab8da33ce1d10d8759-merged.mount: Deactivated successfully.
Dec 05 10:14:17 compute-0 podman[264045]: 2025-12-05 10:14:17.018380216 +0000 UTC m=+0.084957103 container remove e3204173ac82aefe28fb2a1cc45270d895ab3d039a7307acd650c9988ee5c3c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:14:17 compute-0 systemd[1]: libpod-conmon-e3204173ac82aefe28fb2a1cc45270d895ab3d039a7307acd650c9988ee5c3c7.scope: Deactivated successfully.
Dec 05 10:14:17 compute-0 sudo[263907]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:17 compute-0 sudo[264060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:14:17 compute-0 sudo[264060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:17 compute-0 sudo[264060]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:17 compute-0 sudo[264085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:14:17 compute-0 sudo[264085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:17.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:17.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:17.348Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:14:17 compute-0 ceph-mon[74418]: pgmap v753: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 73 op/s
Dec 05 10:14:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec 05 10:14:17 compute-0 podman[264148]: 2025-12-05 10:14:17.650306888 +0000 UTC m=+0.038806797 container create 8192968698fb15a9e0f4ace42ad7a14af51a4fda26ae68a7a5e91bcc9a70dd10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Dec 05 10:14:17 compute-0 systemd[1]: Started libpod-conmon-8192968698fb15a9e0f4ace42ad7a14af51a4fda26ae68a7a5e91bcc9a70dd10.scope.
Dec 05 10:14:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:17 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:14:17 compute-0 podman[264148]: 2025-12-05 10:14:17.713420117 +0000 UTC m=+0.101920016 container init 8192968698fb15a9e0f4ace42ad7a14af51a4fda26ae68a7a5e91bcc9a70dd10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_taussig, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 10:14:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v754: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.7 KiB/s wr, 43 op/s
Dec 05 10:14:17 compute-0 podman[264148]: 2025-12-05 10:14:17.720201911 +0000 UTC m=+0.108701800 container start 8192968698fb15a9e0f4ace42ad7a14af51a4fda26ae68a7a5e91bcc9a70dd10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_taussig, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 05 10:14:17 compute-0 podman[264148]: 2025-12-05 10:14:17.723610303 +0000 UTC m=+0.112110242 container attach 8192968698fb15a9e0f4ace42ad7a14af51a4fda26ae68a7a5e91bcc9a70dd10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:14:17 compute-0 sweet_taussig[264164]: 167 167
Dec 05 10:14:17 compute-0 systemd[1]: libpod-8192968698fb15a9e0f4ace42ad7a14af51a4fda26ae68a7a5e91bcc9a70dd10.scope: Deactivated successfully.
Dec 05 10:14:17 compute-0 podman[264148]: 2025-12-05 10:14:17.725454454 +0000 UTC m=+0.113954353 container died 8192968698fb15a9e0f4ace42ad7a14af51a4fda26ae68a7a5e91bcc9a70dd10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_taussig, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:14:17 compute-0 podman[264148]: 2025-12-05 10:14:17.633441089 +0000 UTC m=+0.021940998 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a66b0faa529522f2a0722bae60055e04592abc7410ceb75a3a7562313fe2ed8-merged.mount: Deactivated successfully.
Dec 05 10:14:17 compute-0 podman[264148]: 2025-12-05 10:14:17.769635867 +0000 UTC m=+0.158135796 container remove 8192968698fb15a9e0f4ace42ad7a14af51a4fda26ae68a7a5e91bcc9a70dd10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_taussig, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 05 10:14:17 compute-0 systemd[1]: libpod-conmon-8192968698fb15a9e0f4ace42ad7a14af51a4fda26ae68a7a5e91bcc9a70dd10.scope: Deactivated successfully.
Dec 05 10:14:17 compute-0 podman[264189]: 2025-12-05 10:14:17.986580212 +0000 UTC m=+0.055650186 container create 7166eda2605f524a73aca7cec566accc7c7d9b6601038bde7a2cdfd8b5e8686b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:14:18 compute-0 systemd[1]: Started libpod-conmon-7166eda2605f524a73aca7cec566accc7c7d9b6601038bde7a2cdfd8b5e8686b.scope.
Dec 05 10:14:18 compute-0 podman[264189]: 2025-12-05 10:14:17.964659265 +0000 UTC m=+0.033729249 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:14:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6e0a8787b01a4445e364c785f949a10d7fe520e00698104d69ba70d545dc82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6e0a8787b01a4445e364c785f949a10d7fe520e00698104d69ba70d545dc82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6e0a8787b01a4445e364c785f949a10d7fe520e00698104d69ba70d545dc82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6e0a8787b01a4445e364c785f949a10d7fe520e00698104d69ba70d545dc82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:18 compute-0 podman[264189]: 2025-12-05 10:14:18.101218392 +0000 UTC m=+0.170288376 container init 7166eda2605f524a73aca7cec566accc7c7d9b6601038bde7a2cdfd8b5e8686b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_feynman, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Dec 05 10:14:18 compute-0 podman[264189]: 2025-12-05 10:14:18.109376444 +0000 UTC m=+0.178446408 container start 7166eda2605f524a73aca7cec566accc7c7d9b6601038bde7a2cdfd8b5e8686b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:14:18 compute-0 podman[264189]: 2025-12-05 10:14:18.112816458 +0000 UTC m=+0.181886452 container attach 7166eda2605f524a73aca7cec566accc7c7d9b6601038bde7a2cdfd8b5e8686b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_feynman, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 10:14:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:18 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]: {
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:     "1": [
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:         {
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             "devices": [
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "/dev/loop3"
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             ],
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             "lv_name": "ceph_lv0",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             "lv_size": "21470642176",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             "name": "ceph_lv0",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             "tags": {
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.cluster_name": "ceph",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.crush_device_class": "",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.encrypted": "0",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.osd_id": "1",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.type": "block",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.vdo": "0",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:                 "ceph.with_tpm": "0"
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             },
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             "type": "block",
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:             "vg_name": "ceph_vg0"
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:         }
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]:     ]
Dec 05 10:14:18 compute-0 hopeful_feynman[264205]: }
Dec 05 10:14:18 compute-0 ceph-mon[74418]: pgmap v754: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.7 KiB/s wr, 43 op/s
Dec 05 10:14:18 compute-0 systemd[1]: libpod-7166eda2605f524a73aca7cec566accc7c7d9b6601038bde7a2cdfd8b5e8686b.scope: Deactivated successfully.
Dec 05 10:14:18 compute-0 podman[264189]: 2025-12-05 10:14:18.409112723 +0000 UTC m=+0.478182717 container died 7166eda2605f524a73aca7cec566accc7c7d9b6601038bde7a2cdfd8b5e8686b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 10:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab6e0a8787b01a4445e364c785f949a10d7fe520e00698104d69ba70d545dc82-merged.mount: Deactivated successfully.
Dec 05 10:14:18 compute-0 podman[264189]: 2025-12-05 10:14:18.462356843 +0000 UTC m=+0.531426817 container remove 7166eda2605f524a73aca7cec566accc7c7d9b6601038bde7a2cdfd8b5e8686b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_feynman, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 10:14:18 compute-0 systemd[1]: libpod-conmon-7166eda2605f524a73aca7cec566accc7c7d9b6601038bde7a2cdfd8b5e8686b.scope: Deactivated successfully.
Dec 05 10:14:18 compute-0 sudo[264085]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:18 compute-0 sudo[264228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:14:18 compute-0 sudo[264228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:18 compute-0 sudo[264228]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:18 compute-0 sudo[264253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:14:18 compute-0 sudo[264253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:18 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:19 compute-0 podman[264319]: 2025-12-05 10:14:19.194522032 +0000 UTC m=+0.048814289 container create 61c6d9b652ba8eb041cc57b79a75e9190e9409a95e6f81ba8f6b64d9d6d931b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 10:14:19 compute-0 systemd[1]: Started libpod-conmon-61c6d9b652ba8eb041cc57b79a75e9190e9409a95e6f81ba8f6b64d9d6d931b9.scope.
Dec 05 10:14:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:19.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:14:19 compute-0 podman[264319]: 2025-12-05 10:14:19.172679269 +0000 UTC m=+0.026971546 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:14:19 compute-0 podman[264319]: 2025-12-05 10:14:19.281051138 +0000 UTC m=+0.135343425 container init 61c6d9b652ba8eb041cc57b79a75e9190e9409a95e6f81ba8f6b64d9d6d931b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:14:19 compute-0 podman[264319]: 2025-12-05 10:14:19.288756298 +0000 UTC m=+0.143048555 container start 61c6d9b652ba8eb041cc57b79a75e9190e9409a95e6f81ba8f6b64d9d6d931b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_swirles, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:14:19 compute-0 podman[264319]: 2025-12-05 10:14:19.291846592 +0000 UTC m=+0.146138929 container attach 61c6d9b652ba8eb041cc57b79a75e9190e9409a95e6f81ba8f6b64d9d6d931b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_swirles, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:14:19 compute-0 compassionate_swirles[264335]: 167 167
Dec 05 10:14:19 compute-0 systemd[1]: libpod-61c6d9b652ba8eb041cc57b79a75e9190e9409a95e6f81ba8f6b64d9d6d931b9.scope: Deactivated successfully.
Dec 05 10:14:19 compute-0 podman[264319]: 2025-12-05 10:14:19.296789817 +0000 UTC m=+0.151082084 container died 61c6d9b652ba8eb041cc57b79a75e9190e9409a95e6f81ba8f6b64d9d6d931b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_swirles, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:14:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:19.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d0fe7c3fc9e387e48b164b7df8e303b1ba2a08befb21b050d10afbeea3ca985-merged.mount: Deactivated successfully.
Dec 05 10:14:19 compute-0 podman[264319]: 2025-12-05 10:14:19.333763443 +0000 UTC m=+0.188055700 container remove 61c6d9b652ba8eb041cc57b79a75e9190e9409a95e6f81ba8f6b64d9d6d931b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_swirles, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 10:14:19 compute-0 systemd[1]: libpod-conmon-61c6d9b652ba8eb041cc57b79a75e9190e9409a95e6f81ba8f6b64d9d6d931b9.scope: Deactivated successfully.
Dec 05 10:14:19 compute-0 podman[264361]: 2025-12-05 10:14:19.543836831 +0000 UTC m=+0.059127940 container create 8c013648b8638f5a4d70ed6a97082e9448f432087c9ffcaf8e00cf0c049c1e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_taussig, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:14:19 compute-0 systemd[1]: Started libpod-conmon-8c013648b8638f5a4d70ed6a97082e9448f432087c9ffcaf8e00cf0c049c1e60.scope.
Dec 05 10:14:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:14:19 compute-0 podman[264361]: 2025-12-05 10:14:19.525871983 +0000 UTC m=+0.041163112 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e6cf2e226358e47e1940116cc0a32bbe905740cbdcbef756159553ffe7978/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e6cf2e226358e47e1940116cc0a32bbe905740cbdcbef756159553ffe7978/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e6cf2e226358e47e1940116cc0a32bbe905740cbdcbef756159553ffe7978/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e6cf2e226358e47e1940116cc0a32bbe905740cbdcbef756159553ffe7978/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:19 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v755: 353 pgs: 353 active+clean; 198 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec 05 10:14:19 compute-0 podman[264361]: 2025-12-05 10:14:19.725368363 +0000 UTC m=+0.240659482 container init 8c013648b8638f5a4d70ed6a97082e9448f432087c9ffcaf8e00cf0c049c1e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 10:14:19 compute-0 podman[264361]: 2025-12-05 10:14:19.738100749 +0000 UTC m=+0.253391898 container start 8c013648b8638f5a4d70ed6a97082e9448f432087c9ffcaf8e00cf0c049c1e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 05 10:14:19 compute-0 podman[264361]: 2025-12-05 10:14:19.742800497 +0000 UTC m=+0.258091596 container attach 8c013648b8638f5a4d70ed6a97082e9448f432087c9ffcaf8e00cf0c049c1e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_taussig, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 05 10:14:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:20 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:20 compute-0 lvm[264452]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:14:20 compute-0 lvm[264452]: VG ceph_vg0 finished
Dec 05 10:14:20 compute-0 hungry_taussig[264377]: {}
Dec 05 10:14:20 compute-0 systemd[1]: libpod-8c013648b8638f5a4d70ed6a97082e9448f432087c9ffcaf8e00cf0c049c1e60.scope: Deactivated successfully.
Dec 05 10:14:20 compute-0 systemd[1]: libpod-8c013648b8638f5a4d70ed6a97082e9448f432087c9ffcaf8e00cf0c049c1e60.scope: Consumed 1.032s CPU time.
Dec 05 10:14:20 compute-0 podman[264361]: 2025-12-05 10:14:20.42659181 +0000 UTC m=+0.941882909 container died 8c013648b8638f5a4d70ed6a97082e9448f432087c9ffcaf8e00cf0c049c1e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_taussig, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:14:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a84e6cf2e226358e47e1940116cc0a32bbe905740cbdcbef756159553ffe7978-merged.mount: Deactivated successfully.
Dec 05 10:14:20 compute-0 podman[264361]: 2025-12-05 10:14:20.469038756 +0000 UTC m=+0.984329855 container remove 8c013648b8638f5a4d70ed6a97082e9448f432087c9ffcaf8e00cf0c049c1e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_taussig, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:14:20 compute-0 systemd[1]: libpod-conmon-8c013648b8638f5a4d70ed6a97082e9448f432087c9ffcaf8e00cf0c049c1e60.scope: Deactivated successfully.
Dec 05 10:14:20 compute-0 sudo[264253]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:14:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:14:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:14:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:14:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:14:20.572 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:14:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:14:20.572 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:14:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:14:20.573 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:14:20 compute-0 sudo[264468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:14:20 compute-0 sudo[264468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:20 compute-0 sudo[264468]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:20 compute-0 ceph-mon[74418]: pgmap v755: 353 pgs: 353 active+clean; 198 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec 05 10:14:20 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:14:20 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:14:20 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:20 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:21.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:21.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:21 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:21 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v756: 353 pgs: 353 active+clean; 198 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 05 10:14:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:22 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/101422 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:14:22 compute-0 ceph-mon[74418]: pgmap v756: 353 pgs: 353 active+clean; 198 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 05 10:14:22 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:22 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:23.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:23.644Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:14:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:23 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f18140025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v757: 353 pgs: 353 active+clean; 200 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:14:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:24 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:24 compute-0 ceph-mon[74418]: pgmap v757: 353 pgs: 353 active+clean; 200 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:14:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:24 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:25.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:14:25] "GET /metrics HTTP/1.1" 200 48544 "" "Prometheus/2.51.0"
Dec 05 10:14:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:14:25] "GET /metrics HTTP/1.1" 200 48544 "" "Prometheus/2.51.0"
Dec 05 10:14:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:25 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v758: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 05 10:14:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:26 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1804000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:26 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:26 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17ec000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:26 compute-0 ceph-mon[74418]: pgmap v758: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 05 10:14:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:27.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:27.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:27.350Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:14:27
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root', 'vms', '.mgr', '.nfs', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:14:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:14:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:14:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:27 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v759: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:14:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001517210697890213 of space, bias 1.0, pg target 0.4551632093670639 quantized to 32 (current 32)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:14:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:14:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:28 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:28 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1804001aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:28 compute-0 ceph-mon[74418]: pgmap v759: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:14:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:29.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:29.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:29 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17ec0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v760: 353 pgs: 353 active+clean; 121 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Dec 05 10:14:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:30 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:30 compute-0 ceph-mon[74418]: pgmap v760: 353 pgs: 353 active+clean; 121 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Dec 05 10:14:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:30 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:31.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:31.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:31 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v761: 353 pgs: 353 active+clean; 121 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 16 KiB/s wr, 20 op/s
Dec 05 10:14:32 compute-0 sudo[264506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:14:32 compute-0 sudo[264506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:32 compute-0 sudo[264506]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:32 compute-0 ceph-mon[74418]: pgmap v761: 353 pgs: 353 active+clean; 121 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 16 KiB/s wr, 20 op/s
Dec 05 10:14:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:32 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17ec0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:32 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:33 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/357325283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:14:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:33.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:33.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:33.645Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:14:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:33 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v762: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 17 KiB/s wr, 23 op/s
Dec 05 10:14:34 compute-0 ceph-mon[74418]: pgmap v762: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 17 KiB/s wr, 23 op/s
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.170678) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929674171094, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 887, "num_deletes": 251, "total_data_size": 1474097, "memory_usage": 1504320, "flush_reason": "Manual Compaction"}
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec 05 10:14:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:34 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1804001aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929674193743, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1433586, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24010, "largest_seqno": 24896, "table_properties": {"data_size": 1429182, "index_size": 2056, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9917, "raw_average_key_size": 19, "raw_value_size": 1420271, "raw_average_value_size": 2823, "num_data_blocks": 90, "num_entries": 503, "num_filter_entries": 503, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929604, "oldest_key_time": 1764929604, "file_creation_time": 1764929674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 23161 microseconds, and 11786 cpu microseconds.
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.193835) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1433586 bytes OK
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.193903) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.196434) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.196449) EVENT_LOG_v1 {"time_micros": 1764929674196444, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.196479) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1469833, prev total WAL file size 1469833, number of live WAL files 2.
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.197144) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1399KB)], [53(13MB)]
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929674197268, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 15357755, "oldest_snapshot_seqno": -1}
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5520 keys, 13074722 bytes, temperature: kUnknown
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929674328537, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 13074722, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13037819, "index_size": 22009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 141954, "raw_average_key_size": 25, "raw_value_size": 12937639, "raw_average_value_size": 2343, "num_data_blocks": 890, "num_entries": 5520, "num_filter_entries": 5520, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764929674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.328878) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 13074722 bytes
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.332823) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.9 rd, 99.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 13.3 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(19.8) write-amplify(9.1) OK, records in: 6036, records dropped: 516 output_compression: NoCompression
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.332840) EVENT_LOG_v1 {"time_micros": 1764929674332832, "job": 28, "event": "compaction_finished", "compaction_time_micros": 131412, "compaction_time_cpu_micros": 49916, "output_level": 6, "num_output_files": 1, "total_output_size": 13074722, "num_input_records": 6036, "num_output_records": 5520, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929674333293, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929674335809, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.197058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.336043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.336053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.336062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.336065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:14:34 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:14:34.336067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:14:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:34 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17ec0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:35.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:35.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:14:35] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Dec 05 10:14:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:14:35] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Dec 05 10:14:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:35 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v763: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 16 KiB/s wr, 29 op/s
Dec 05 10:14:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:36 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:36 compute-0 ceph-mon[74418]: pgmap v763: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 16 KiB/s wr, 29 op/s
Dec 05 10:14:36 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:36 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17ec002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:37.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:37.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:37.351Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:14:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:37 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f18040027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v764: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 05 10:14:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:38 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:38 compute-0 ceph-mon[74418]: pgmap v764: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 05 10:14:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:38 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:39.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:39.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:39 compute-0 podman[264539]: 2025-12-05 10:14:39.416461012 +0000 UTC m=+0.070409977 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 05 10:14:39 compute-0 podman[264540]: 2025-12-05 10:14:39.425819857 +0000 UTC m=+0.080686498 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 05 10:14:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:39 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v765: 353 pgs: 353 active+clean; 41 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 24 KiB/s wr, 57 op/s
Dec 05 10:14:40 compute-0 ceph-mon[74418]: pgmap v765: 353 pgs: 353 active+clean; 41 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 24 KiB/s wr, 57 op/s
Dec 05 10:14:40 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/556869914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:14:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:40 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f18040027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:40 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:40 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:41.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:14:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:41.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:14:41 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:41 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17ec002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v766: 353 pgs: 353 active+clean; 41 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 11 KiB/s wr, 38 op/s
Dec 05 10:14:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:42 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:14:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:14:42 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:42 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f18040027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:42 compute-0 ceph-mon[74418]: pgmap v766: 353 pgs: 353 active+clean; 41 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 11 KiB/s wr, 38 op/s
Dec 05 10:14:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:14:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:43.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:43.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:43 compute-0 podman[264581]: 2025-12-05 10:14:43.440593201 +0000 UTC m=+0.110531769 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 05 10:14:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:43.646Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:14:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:43.646Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:14:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:43 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v767: 353 pgs: 353 active+clean; 41 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 11 KiB/s wr, 38 op/s
Dec 05 10:14:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:44 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17ec003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:44 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:45 compute-0 ceph-mon[74418]: pgmap v767: 353 pgs: 353 active+clean; 41 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 11 KiB/s wr, 38 op/s
Dec 05 10:14:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:45.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:45.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:14:45] "GET /metrics HTTP/1.1" 200 48523 "" "Prometheus/2.51.0"
Dec 05 10:14:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:14:45] "GET /metrics HTTP/1.1" 200 48523 "" "Prometheus/2.51.0"
Dec 05 10:14:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:45 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1804003c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v768: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 11 KiB/s wr, 36 op/s
Dec 05 10:14:46 compute-0 ceph-mon[74418]: pgmap v768: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 11 KiB/s wr, 36 op/s
Dec 05 10:14:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:46 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:46 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17ec003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:47.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:47.352Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:14:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:47.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:47 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v769: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 9.5 KiB/s wr, 28 op/s
Dec 05 10:14:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:48 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1804003c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:48 compute-0 ceph-mon[74418]: pgmap v769: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 9.5 KiB/s wr, 28 op/s
Dec 05 10:14:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:48 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:49.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:49.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:49 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17ec004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v770: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 9.5 KiB/s wr, 28 op/s
Dec 05 10:14:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:50 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:50 compute-0 ceph-mon[74418]: pgmap v770: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 9.5 KiB/s wr, 28 op/s
Dec 05 10:14:50 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:50 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1804003c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:51.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:14:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:51.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:14:51 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:51 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v771: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:14:52 compute-0 sudo[264615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:14:52 compute-0 sudo[264615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:14:52 compute-0 sudo[264615]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17ec004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:52 compute-0 ceph-mon[74418]: pgmap v771: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:14:52 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:52 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:53.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:53.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:53.648Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:14:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:53 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1804003c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v772: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:14:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:54 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:54 compute-0 ceph-mon[74418]: pgmap v772: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:14:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:54 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17ec004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:55 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:14:55.187 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:14:55 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:14:55.189 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:14:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:14:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:55.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:14:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:55.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:55 compute-0 nova_compute[257087]: 2025-12-05 10:14:55.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:14:55] "GET /metrics HTTP/1.1" 200 48523 "" "Prometheus/2.51.0"
Dec 05 10:14:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:14:55] "GET /metrics HTTP/1.1" 200 48523 "" "Prometheus/2.51.0"
Dec 05 10:14:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:55 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v773: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:14:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:56 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1804003c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:56 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:56 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:57.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:14:57.359Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:14:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:57.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:57 compute-0 ceph-mon[74418]: pgmap v773: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:14:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/444267427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:14:57 compute-0 nova_compute[257087]: 2025-12-05 10:14:57.525 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:57 compute-0 nova_compute[257087]: 2025-12-05 10:14:57.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:57 compute-0 nova_compute[257087]: 2025-12-05 10:14:57.528 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:14:57 compute-0 nova_compute[257087]: 2025-12-05 10:14:57.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:14:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:14:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:14:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:14:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:14:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:14:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:14:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:14:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:14:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:57 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17e4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v774: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:14:57 compute-0 nova_compute[257087]: 2025-12-05 10:14:57.854 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:14:57 compute-0 nova_compute[257087]: 2025-12-05 10:14:57.855 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:58 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:58 compute-0 nova_compute[257087]: 2025-12-05 10:14:58.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:58 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1814001d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:59 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3891287327' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:14:59 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3891287327' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:14:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:14:59 compute-0 ceph-mon[74418]: pgmap v774: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:14:59 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1039993983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:14:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:14:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:14:59.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:14:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:14:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:14:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:14:59.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:14:59 compute-0 nova_compute[257087]: 2025-12-05 10:14:59.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:59 compute-0 nova_compute[257087]: 2025-12-05 10:14:59.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:14:59 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1814001d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:14:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v775: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:00 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:15:00.192 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:15:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:00 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:00 compute-0 nova_compute[257087]: 2025-12-05 10:15:00.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:00 compute-0 nova_compute[257087]: 2025-12-05 10:15:00.528 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:15:00 compute-0 nova_compute[257087]: 2025-12-05 10:15:00.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:00 compute-0 nova_compute[257087]: 2025-12-05 10:15:00.788 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:15:00 compute-0 nova_compute[257087]: 2025-12-05 10:15:00.788 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:15:00 compute-0 nova_compute[257087]: 2025-12-05 10:15:00.789 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:15:00 compute-0 nova_compute[257087]: 2025-12-05 10:15:00.789 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:15:00 compute-0 nova_compute[257087]: 2025-12-05 10:15:00.790 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:15:00 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:00 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:01.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:15:01 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2426710636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:15:01 compute-0 ceph-mon[74418]: pgmap v775: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:01 compute-0 nova_compute[257087]: 2025-12-05 10:15:01.532 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.742s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:15:01 compute-0 nova_compute[257087]: 2025-12-05 10:15:01.710 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:15:01 compute-0 nova_compute[257087]: 2025-12-05 10:15:01.712 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4882MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:15:01 compute-0 nova_compute[257087]: 2025-12-05 10:15:01.712 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:15:01 compute-0 nova_compute[257087]: 2025-12-05 10:15:01.712 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:15:01 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:01 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v776: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:01 compute-0 nova_compute[257087]: 2025-12-05 10:15:01.798 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:15:01 compute-0 nova_compute[257087]: 2025-12-05 10:15:01.799 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:15:01 compute-0 nova_compute[257087]: 2025-12-05 10:15:01.845 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:15:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:02 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1814002680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:15:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2428490770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:15:02 compute-0 nova_compute[257087]: 2025-12-05 10:15:02.336 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:15:02 compute-0 nova_compute[257087]: 2025-12-05 10:15:02.342 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:15:02 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:02 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2469727057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:15:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2426710636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:15:03 compute-0 ceph-mon[74418]: pgmap v776: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1676802347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:15:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2428490770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:15:03 compute-0 nova_compute[257087]: 2025-12-05 10:15:03.062 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:15:03 compute-0 nova_compute[257087]: 2025-12-05 10:15:03.064 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:15:03 compute-0 nova_compute[257087]: 2025-12-05 10:15:03.064 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
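The "Final resource view", the ceph df subprocess, and the inventory update above all belong to one _update_available_resource pass, serialized under the compute_resources lock (held 1.352 s here). The storage query can be reproduced by hand; this assumes the client.openstack key referenced by /etc/ceph/ceph.conf is readable on this host:

    # The exact command nova_compute logged above; pretty-print the JSON it parses.
    ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf | python3 -m json.tool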
Dec 05 10:15:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:03.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:15:03.650Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
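Alertmanager is timing out on its POSTs to the dashboard receivers on compute-1 and compute-2. A minimal reachability probe from this host, assuming plain HTTP as in the logged URLs (the dashboard may in fact expect TLS or auth, so any response at all, even an error status, rules out the network):

    # Probe the two failing receivers; --max-time approximates the dispatcher's deadline.
    for h in compute-1 compute-2; do
      curl -sv --max-time 5 -o /dev/null \
        -X POST "http://${h}.ctlplane.example.com:8443/api/prometheus_receiver" \
        || echo "${h}: unreachable"
    done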
Dec 05 10:15:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:03 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v777: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:04 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:04 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1814002680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:05 compute-0 ceph-mon[74418]: pgmap v777: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:05.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:15:05] "GET /metrics HTTP/1.1" 200 48520 "" "Prometheus/2.51.0"
Dec 05 10:15:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:15:05] "GET /metrics HTTP/1.1" 200 48520 "" "Prometheus/2.51.0"
Dec 05 10:15:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:05 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v778: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:15:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:06 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:06 compute-0 ceph-mon[74418]: pgmap v778: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:15:06 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:06 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:07.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:15:07.361Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:15:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:07.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:07 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1814002680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v779: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:08 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17e4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:08 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:08 compute-0 ceph-mon[74418]: pgmap v779: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:09.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:09.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:09 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v780: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:10 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1814003b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:10 compute-0 ceph-mon[74418]: pgmap v780: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:10 compute-0 podman[264705]: 2025-12-05 10:15:10.404991467 +0000 UTC m=+0.061179474 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 05 10:15:10 compute-0 podman[264706]: 2025-12-05 10:15:10.411762621 +0000 UTC m=+0.062873610 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:15:10 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:10 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17e4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:11 compute-0 rsyslogd[1004]: imjournal: 2902 messages lost due to rate-limiting (20000 allowed within 600 seconds)
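The imjournal message above means rsyslog dropped 2902 journal entries under its default rate limit (burst 20000 per 600 s, matching the figures in the message). A sketch of how that limit can be raised, assuming rsyslog 8 RainerScript syntax; edit the existing module(load="imjournal") stanza rather than loading the module twice, and treat the burst value as illustrative:

    # In /etc/rsyslog.conf, on the existing imjournal load line:
    #   module(load="imjournal" Ratelimit.Interval="600" Ratelimit.Burst="50000")
    # then apply it:
    systemctl restart rsyslog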
Dec 05 10:15:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:11.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:11 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:11 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v781: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:12 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:12 compute-0 sudo[264743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:15:12 compute-0 sudo[264743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:12 compute-0 sudo[264743]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:15:12 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1053704069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:15:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:15:12 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:12 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1814003b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:13.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:15:13.651Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:15:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:13 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1814003b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v782: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:14 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f181c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:14 compute-0 podman[264770]: 2025-12-05 10:15:14.429496223 +0000 UTC m=+0.094059926 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:15:14 compute-0 ceph-mon[74418]: pgmap v781: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:15:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:14 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:15.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:15 compute-0 ceph-mon[74418]: pgmap v782: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:15:15] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec 05 10:15:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:15:15] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec 05 10:15:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:15 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:15:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:16 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:16 compute-0 ceph-mon[74418]: pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 10:15:16 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:16 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 05 10:15:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:17.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:15:17.362Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:15:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:17.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:17 compute-0 kernel: ganesha.nfsd[262882]: segfault at 50 ip 00007f18c675532e sp 00007f187effc210 error 4 in libntirpc.so.5.8[7f18c673a000+2c000] likely on CPU 1 (core 0, socket 1)
Dec 05 10:15:17 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 05 10:15:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[262762]: 05/12/2025 10:15:17 : epoch 6932b054 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f17f8003d50 fd 38 proxy ignored for local
Dec 05 10:15:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:17 compute-0 systemd[1]: Started Process Core Dump (PID 264799/UID 0).
Dec 05 10:15:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:18 compute-0 ceph-mon[74418]: pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 05 10:15:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2969671835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:15:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2168545864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:15:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:19.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:19.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v785: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 10:15:19 compute-0 systemd-coredump[264800]: Process 262766 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f18c675532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 05 10:15:20 compute-0 systemd[1]: systemd-coredump@6-264799-0.service: Deactivated successfully.
Dec 05 10:15:20 compute-0 systemd[1]: systemd-coredump@6-264799-0.service: Consumed 1.939s CPU time.
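systemd-coredump captured PID 262766 (ganesha.nfsd) with a single unresolved frame in libntirpc.so.5.8, consistent with the segfault the kernel reported a few lines earlier. To retrieve and inspect the dump, assuming gdb is installed (the daemon runs from a container image, so matching debuginfo may need to come from that image rather than the host):

    coredumpctl list ganesha.nfsd     # confirm the dump was stored
    coredumpctl info 262766           # full metadata and stack trace
    coredumpctl debug 262766          # open the core in gdb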
Dec 05 10:15:20 compute-0 podman[264807]: 2025-12-05 10:15:20.084890214 +0000 UTC m=+0.025922075 container died f23716e3400f7b56d94742c9cddefa386baa016571b977e3b310f244f93705bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 10:15:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d215c261b34711275969e3ef0acfb602abd6966547f48400d06888bbe5422d37-merged.mount: Deactivated successfully.
Dec 05 10:15:20 compute-0 podman[264807]: 2025-12-05 10:15:20.258129933 +0000 UTC m=+0.199161824 container remove f23716e3400f7b56d94742c9cddefa386baa016571b977e3b310f244f93705bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 10:15:20 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Main process exited, code=exited, status=139/n/a
Dec 05 10:15:20 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Failed with result 'exit-code'.
Dec 05 10:15:20 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 2.387s CPU time.
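The crash took the cephadm-managed NFS unit down with status=139 (SIGSEGV). Since cephadm owns this unit, restarting through the orchestrator is cleaner than systemctl; a sketch, with the daemon name taken from the unit name above:

    # Check daemon state as cephadm sees it, then restart it.
    ceph orch ps | grep nfs.cephfs
    ceph orch daemon restart nfs.cephfs.2.0.compute-0.hocvro
    # Raw unit logs, if needed:
    journalctl -e -u 'ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service'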
Dec 05 10:15:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:15:20.572 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:15:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:15:20.573 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:15:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:15:20.574 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:15:20 compute-0 sudo[264851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:15:20 compute-0 sudo[264851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:20 compute-0 sudo[264851]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:21 compute-0 sudo[264876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 05 10:15:21 compute-0 sudo[264876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:21.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:21.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:21 compute-0 sudo[264876]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 10:15:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:15:21 compute-0 ceph-mon[74418]: pgmap v785: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 10:15:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v786: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 10:15:22 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 10:15:22 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:15:22 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:22 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:22 compute-0 sudo[264923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:15:22 compute-0 sudo[264923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:22 compute-0 sudo[264923]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:22 compute-0 sudo[264949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:15:22 compute-0 sudo[264949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:22 compute-0 ceph-mon[74418]: pgmap v786: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 10:15:22 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:22 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:22 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:22 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:22 compute-0 sudo[264949]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:15:22 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:15:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:15:22 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:15:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:15:22 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:15:22 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:15:22 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:15:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:15:22 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:15:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:15:22 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:15:22 compute-0 sudo[265008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:15:22 compute-0 sudo[265008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:22 compute-0 sudo[265008]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:22 compute-0 sudo[265033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:15:22 compute-0 sudo[265033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:23.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:23.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:23 compute-0 podman[265098]: 2025-12-05 10:15:23.512639633 +0000 UTC m=+0.100497492 container create 4e3e1b60b8ff9f54911d1c5faabeba86a433fb68de3132138b62393d6df420da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_solomon, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:15:23 compute-0 podman[265098]: 2025-12-05 10:15:23.44998442 +0000 UTC m=+0.037842289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:15:23 compute-0 systemd[1]: Started libpod-conmon-4e3e1b60b8ff9f54911d1c5faabeba86a433fb68de3132138b62393d6df420da.scope.
Dec 05 10:15:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:15:23 compute-0 podman[265098]: 2025-12-05 10:15:23.605717143 +0000 UTC m=+0.193575052 container init 4e3e1b60b8ff9f54911d1c5faabeba86a433fb68de3132138b62393d6df420da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 05 10:15:23 compute-0 podman[265098]: 2025-12-05 10:15:23.616421724 +0000 UTC m=+0.204279543 container start 4e3e1b60b8ff9f54911d1c5faabeba86a433fb68de3132138b62393d6df420da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:15:23 compute-0 podman[265098]: 2025-12-05 10:15:23.620050902 +0000 UTC m=+0.207908771 container attach 4e3e1b60b8ff9f54911d1c5faabeba86a433fb68de3132138b62393d6df420da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_solomon, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:15:23 compute-0 systemd[1]: libpod-4e3e1b60b8ff9f54911d1c5faabeba86a433fb68de3132138b62393d6df420da.scope: Deactivated successfully.
Dec 05 10:15:23 compute-0 sad_solomon[265115]: 167 167
Dec 05 10:15:23 compute-0 conmon[265115]: conmon 4e3e1b60b8ff9f54911d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e3e1b60b8ff9f54911d1c5faabeba86a433fb68de3132138b62393d6df420da.scope/container/memory.events
Dec 05 10:15:23 compute-0 podman[265098]: 2025-12-05 10:15:23.627105784 +0000 UTC m=+0.214963663 container died 4e3e1b60b8ff9f54911d1c5faabeba86a433fb68de3132138b62393d6df420da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_solomon, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 10:15:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:15:23.653Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-22c64ae49532a92600f13e4841d9bcd062fba3028b6764cdb609e7bb5bc859ce-merged.mount: Deactivated successfully.
Dec 05 10:15:23 compute-0 podman[265098]: 2025-12-05 10:15:23.670104833 +0000 UTC m=+0.257962672 container remove 4e3e1b60b8ff9f54911d1c5faabeba86a433fb68de3132138b62393d6df420da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_solomon, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 10:15:23 compute-0 systemd[1]: libpod-conmon-4e3e1b60b8ff9f54911d1c5faabeba86a433fb68de3132138b62393d6df420da.scope: Deactivated successfully.
Dec 05 10:15:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v787: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 682 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Dec 05 10:15:23 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:15:23 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:15:23 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:23 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:23 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:15:23 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:15:23 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:15:23 compute-0 podman[265138]: 2025-12-05 10:15:23.868918556 +0000 UTC m=+0.066072567 container create 9bf60ba104e5f92465ff7a0e48e4dad72b3600deeb324469797095a9ede8913b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhaskara, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:15:23 compute-0 systemd[1]: Started libpod-conmon-9bf60ba104e5f92465ff7a0e48e4dad72b3600deeb324469797095a9ede8913b.scope.
Dec 05 10:15:23 compute-0 podman[265138]: 2025-12-05 10:15:23.839386293 +0000 UTC m=+0.036540414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:15:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37a66448db72ddd86185883d7dae090a487683a7738659f1b196bb180601480/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37a66448db72ddd86185883d7dae090a487683a7738659f1b196bb180601480/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37a66448db72ddd86185883d7dae090a487683a7738659f1b196bb180601480/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37a66448db72ddd86185883d7dae090a487683a7738659f1b196bb180601480/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37a66448db72ddd86185883d7dae090a487683a7738659f1b196bb180601480/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:23 compute-0 podman[265138]: 2025-12-05 10:15:23.975726019 +0000 UTC m=+0.172880030 container init 9bf60ba104e5f92465ff7a0e48e4dad72b3600deeb324469797095a9ede8913b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhaskara, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:15:23 compute-0 podman[265138]: 2025-12-05 10:15:23.987495098 +0000 UTC m=+0.184649109 container start 9bf60ba104e5f92465ff7a0e48e4dad72b3600deeb324469797095a9ede8913b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 05 10:15:23 compute-0 podman[265138]: 2025-12-05 10:15:23.991861007 +0000 UTC m=+0.189015028 container attach 9bf60ba104e5f92465ff7a0e48e4dad72b3600deeb324469797095a9ede8913b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhaskara, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 10:15:24 compute-0 interesting_bhaskara[265154]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:15:24 compute-0 interesting_bhaskara[265154]: --> All data devices are unavailable
Dec 05 10:15:24 compute-0 systemd[1]: libpod-9bf60ba104e5f92465ff7a0e48e4dad72b3600deeb324469797095a9ede8913b.scope: Deactivated successfully.
Dec 05 10:15:24 compute-0 podman[265138]: 2025-12-05 10:15:24.378498565 +0000 UTC m=+0.575652586 container died 9bf60ba104e5f92465ff7a0e48e4dad72b3600deeb324469797095a9ede8913b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhaskara, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:15:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d37a66448db72ddd86185883d7dae090a487683a7738659f1b196bb180601480-merged.mount: Deactivated successfully.
Dec 05 10:15:24 compute-0 podman[265138]: 2025-12-05 10:15:24.427033834 +0000 UTC m=+0.624187845 container remove 9bf60ba104e5f92465ff7a0e48e4dad72b3600deeb324469797095a9ede8913b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhaskara, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 10:15:24 compute-0 systemd[1]: libpod-conmon-9bf60ba104e5f92465ff7a0e48e4dad72b3600deeb324469797095a9ede8913b.scope: Deactivated successfully.
Dec 05 10:15:24 compute-0 sudo[265033]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:24 compute-0 sudo[265182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:15:24 compute-0 sudo[265182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:24 compute-0 sudo[265182]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:24 compute-0 sudo[265207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:15:24 compute-0 sudo[265207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:24 compute-0 ceph-mon[74418]: pgmap v787: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 682 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Dec 05 10:15:25 compute-0 podman[265273]: 2025-12-05 10:15:25.051540097 +0000 UTC m=+0.052330544 container create c5f343287a27bdebb95ff3117515a1bfd9abf9a16a064d2c853f63be454e95e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_neumann, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:15:25 compute-0 systemd[1]: Started libpod-conmon-c5f343287a27bdebb95ff3117515a1bfd9abf9a16a064d2c853f63be454e95e4.scope.
Dec 05 10:15:25 compute-0 podman[265273]: 2025-12-05 10:15:25.028036208 +0000 UTC m=+0.028826635 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:15:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:15:25 compute-0 podman[265273]: 2025-12-05 10:15:25.190827202 +0000 UTC m=+0.191617629 container init c5f343287a27bdebb95ff3117515a1bfd9abf9a16a064d2c853f63be454e95e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:15:25 compute-0 podman[265273]: 2025-12-05 10:15:25.20175268 +0000 UTC m=+0.202543087 container start c5f343287a27bdebb95ff3117515a1bfd9abf9a16a064d2c853f63be454e95e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_neumann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 10:15:25 compute-0 podman[265273]: 2025-12-05 10:15:25.204853734 +0000 UTC m=+0.205644201 container attach c5f343287a27bdebb95ff3117515a1bfd9abf9a16a064d2c853f63be454e95e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_neumann, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:15:25 compute-0 youthful_neumann[265290]: 167 167
Dec 05 10:15:25 compute-0 systemd[1]: libpod-c5f343287a27bdebb95ff3117515a1bfd9abf9a16a064d2c853f63be454e95e4.scope: Deactivated successfully.
Dec 05 10:15:25 compute-0 conmon[265290]: conmon c5f343287a27bdebb95f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5f343287a27bdebb95ff3117515a1bfd9abf9a16a064d2c853f63be454e95e4.scope/container/memory.events
Dec 05 10:15:25 compute-0 podman[265295]: 2025-12-05 10:15:25.272689847 +0000 UTC m=+0.042934617 container died c5f343287a27bdebb95ff3117515a1bfd9abf9a16a064d2c853f63be454e95e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_neumann, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 10:15:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8c226e375f8843afe2940d3624d212be6a986c5ba454578c0f266cc79f23fe4-merged.mount: Deactivated successfully.
Dec 05 10:15:25 compute-0 podman[265295]: 2025-12-05 10:15:25.31547733 +0000 UTC m=+0.085722090 container remove c5f343287a27bdebb95ff3117515a1bfd9abf9a16a064d2c853f63be454e95e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_neumann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 10:15:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:25.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:25 compute-0 systemd[1]: libpod-conmon-c5f343287a27bdebb95ff3117515a1bfd9abf9a16a064d2c853f63be454e95e4.scope: Deactivated successfully.
Dec 05 10:15:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:25.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
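
The "starting new request / req done / beast:" triplets from radosgw are its beast frontend access log; the anonymous HEAD / probes arriving every ~2 s from 192.168.122.100 and .102 are most likely load-balancer health checks, which is why they recur through the rest of this capture. A minimal parser for the beast line, with the field layout inferred from the samples above (not from radosgw documentation), handy when grepping latency out of a capture like this one:

    import re

    # Layout inferred from the beast lines in this journal, e.g.:
    # beast: 0x...: 192.168.122.100 - anonymous [05/Dec/2025:10:15:25.317 +0000]
    #   "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
    BEAST = re.compile(
        r'beast: \S+ (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<size>\d+) .*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous '
            '[05/Dec/2025:10:15:25.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('client'), m.group('request'), m.group('status'), m.group('latency'))
    # 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000
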
Dec 05 10:15:25 compute-0 podman[265318]: 2025-12-05 10:15:25.570990434 +0000 UTC m=+0.054623915 container create 61f8252762d9debdbd1c9712fdce23ea60295ffa5b54e738d333d75369136a37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_chandrasekhar, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 10:15:25 compute-0 systemd[1]: Started libpod-conmon-61f8252762d9debdbd1c9712fdce23ea60295ffa5b54e738d333d75369136a37.scope.
Dec 05 10:15:25 compute-0 podman[265318]: 2025-12-05 10:15:25.543777465 +0000 UTC m=+0.027411006 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:15:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:15:25] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec 05 10:15:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:15:25] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec 05 10:15:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302d32db0a562d8564c9e332eadec6b1ec159b01b96258b21ae47bf1f9ffa84a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302d32db0a562d8564c9e332eadec6b1ec159b01b96258b21ae47bf1f9ffa84a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302d32db0a562d8564c9e332eadec6b1ec159b01b96258b21ae47bf1f9ffa84a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302d32db0a562d8564c9e332eadec6b1ec159b01b96258b21ae47bf1f9ffa84a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:25 compute-0 podman[265318]: 2025-12-05 10:15:25.695605571 +0000 UTC m=+0.179239042 container init 61f8252762d9debdbd1c9712fdce23ea60295ffa5b54e738d333d75369136a37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:15:25 compute-0 podman[265318]: 2025-12-05 10:15:25.702376705 +0000 UTC m=+0.186010186 container start 61f8252762d9debdbd1c9712fdce23ea60295ffa5b54e738d333d75369136a37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_chandrasekhar, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:15:25 compute-0 podman[265318]: 2025-12-05 10:15:25.706528418 +0000 UTC m=+0.190161919 container attach 61f8252762d9debdbd1c9712fdce23ea60295ffa5b54e738d333d75369136a37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_chandrasekhar, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 10:15:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/101525 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:15:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v788: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]: {
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:     "1": [
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:         {
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             "devices": [
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "/dev/loop3"
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             ],
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             "lv_name": "ceph_lv0",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             "lv_size": "21470642176",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             "name": "ceph_lv0",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             "tags": {
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.cluster_name": "ceph",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.crush_device_class": "",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.encrypted": "0",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.osd_id": "1",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.type": "block",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.vdo": "0",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:                 "ceph.with_tpm": "0"
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             },
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             "type": "block",
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:             "vg_name": "ceph_vg0"
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:         }
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]:     ]
Dec 05 10:15:26 compute-0 loving_chandrasekhar[265335]: }
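
That blob is the stdout of the ceph-volume lvm list --format json run launched at the sudo line above, echoed through the journal one line at a time: a map of OSD id -> LVM volumes, here a single block device for osd.1 backed by /dev/loop3. Once the journal prefixes are stripped it is ordinary JSON; a small sketch of pulling out the fields that matter, abridged to the keys shown above:

    import json

    # stdout of "ceph-volume lvm list --format json", journal prefixes stripped;
    # abridged to the keys used below -- the full output is in the log above.
    raw = '''{
        "1": [{
            "devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "lv_size": "21470642176",
            "tags": {"ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
                     "ceph.type": "block"}
        }]
    }'''

    for osd_id, volumes in json.loads(raw).items():
        for lv in volumes:
            size_gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['tags']['ceph.type']} on {lv['lv_path']} "
                  f"({size_gib:.0f} GiB, devices={','.join(lv['devices'])})")
    # osd.1: block on /dev/ceph_vg0/ceph_lv0 (20 GiB, devices=/dev/loop3)
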
Dec 05 10:15:26 compute-0 systemd[1]: libpod-61f8252762d9debdbd1c9712fdce23ea60295ffa5b54e738d333d75369136a37.scope: Deactivated successfully.
Dec 05 10:15:26 compute-0 podman[265318]: 2025-12-05 10:15:26.050646291 +0000 UTC m=+0.534279762 container died 61f8252762d9debdbd1c9712fdce23ea60295ffa5b54e738d333d75369136a37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_chandrasekhar, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 10:15:26 compute-0 ceph-mon[74418]: pgmap v788: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 05 10:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-302d32db0a562d8564c9e332eadec6b1ec159b01b96258b21ae47bf1f9ffa84a-merged.mount: Deactivated successfully.
Dec 05 10:15:26 compute-0 podman[265318]: 2025-12-05 10:15:26.521840896 +0000 UTC m=+1.005474347 container remove 61f8252762d9debdbd1c9712fdce23ea60295ffa5b54e738d333d75369136a37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:15:26 compute-0 systemd[1]: libpod-conmon-61f8252762d9debdbd1c9712fdce23ea60295ffa5b54e738d333d75369136a37.scope: Deactivated successfully.
Dec 05 10:15:26 compute-0 sudo[265207]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:26 compute-0 sudo[265360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:15:26 compute-0 sudo[265360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:26 compute-0 sudo[265360]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:26 compute-0 sudo[265385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:15:26 compute-0 sudo[265385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
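
The pattern of these sudo bursts repeats throughout the section: cephadm's orchestrator first resolves the interpreter (/bin/which python3), then re-runs its copied binary under /var/lib/ceph/<fsid>/, which spins up the short-lived ceph/ceph containers seen above (interesting_bhaskara, loving_chandrasekhar, ...) to run one ceph-volume query each. The lvm list pass returned osd.1 above; this raw list pass returns {} a few lines further down. A sketch of the outer invocation mirroring the logged command line (the run_cephadm helper name is mine; the paths, image digest, fsid, and flags are copied from the log):

    import json
    import subprocess

    FSID = "3c63ce0f-5206-59ae-8381-b67d0b6424b5"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    def run_cephadm(*args: str) -> str:
        # sudo /bin/python3 <cephadm> --image <digest> --timeout 895 <subcommand...>
        cmd = ["sudo", "/bin/python3", CEPHADM,
               "--image", IMAGE, "--timeout", "895", *args]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    raw = json.loads(run_cephadm("ceph-volume", "--fsid", FSID, "--",
                                 "raw", "list", "--format", "json"))
    print(raw)  # {} -- no raw-mode OSDs on this host, matching the log below
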
Dec 05 10:15:27 compute-0 podman[265449]: 2025-12-05 10:15:27.299512841 +0000 UTC m=+0.108931131 container create 3a2a5c1f18f32f17d0d7b794d828e0931cec03b3772c21c11cf77a65c6468104 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_carson, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 05 10:15:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:27.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:27 compute-0 podman[265449]: 2025-12-05 10:15:27.235157253 +0000 UTC m=+0.044575533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:15:27 compute-0 systemd[1]: Started libpod-conmon-3a2a5c1f18f32f17d0d7b794d828e0931cec03b3772c21c11cf77a65c6468104.scope.
Dec 05 10:15:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:15:27.363Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
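
Both dashboard webhook receivers failed here: alertmanager POSTs each alert group as JSON to the configured URLs on compute-1 and compute-2 and logs "context deadline exceeded" when neither answers within its retry window (those two endpoints are evidently down or unreachable; the local mgr's /metrics endpoint answered fine just above). A toy client-side reproduction under those assumptions -- URL taken from the log, payload abridged to the standard alertmanager webhook shape, timeout value illustrative:

    import json
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    payload = {"status": "firing", "alerts": []}  # abridged webhook body

    req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)  # illustrative deadline
    except OSError as exc:  # refused / timed out -> "context deadline exceeded"
        print(f"notify failed: {exc}")
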
Dec 05 10:15:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:15:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:27.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:27 compute-0 podman[265449]: 2025-12-05 10:15:27.555947011 +0000 UTC m=+0.365365301 container init 3a2a5c1f18f32f17d0d7b794d828e0931cec03b3772c21c11cf77a65c6468104 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:15:27 compute-0 podman[265449]: 2025-12-05 10:15:27.56439739 +0000 UTC m=+0.373815630 container start 3a2a5c1f18f32f17d0d7b794d828e0931cec03b3772c21c11cf77a65c6468104 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_carson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 10:15:27 compute-0 suspicious_carson[265465]: 167 167
Dec 05 10:15:27 compute-0 systemd[1]: libpod-3a2a5c1f18f32f17d0d7b794d828e0931cec03b3772c21c11cf77a65c6468104.scope: Deactivated successfully.
Dec 05 10:15:27 compute-0 conmon[265465]: conmon 3a2a5c1f18f32f17d0d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a2a5c1f18f32f17d0d7b794d828e0931cec03b3772c21c11cf77a65c6468104.scope/container/memory.events
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:15:27
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'vms', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.nfs', 'default.rgw.log', 'images']
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:15:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:15:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v789: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 05 10:15:27 compute-0 podman[265449]: 2025-12-05 10:15:27.800582559 +0000 UTC m=+0.610000819 container attach 3a2a5c1f18f32f17d0d7b794d828e0931cec03b3772c21c11cf77a65c6468104 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 10:15:27 compute-0 podman[265449]: 2025-12-05 10:15:27.801405362 +0000 UTC m=+0.610823622 container died 3a2a5c1f18f32f17d0d7b794d828e0931cec03b3772c21c11cf77a65c6468104 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 10:15:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
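
All twelve pool lines above apply one formula: ideal PG count = (pool's fraction of used space) x bias x the cluster PG budget, and every logged value is consistent with a budget of 300, i.e. mon_target_pg_per_osd=100 (the default) times this cluster's three 20 GiB OSDs. A worked check against three of the logged values (the final power-of-two quantization and the shrink threshold are autoscaler internals, only summarized in the trailing comment):

    # Reproduce the "pg target" values printed by [pg_autoscaler] above.
    PG_BUDGET = 300  # mon_target_pg_per_osd (default 100) x 3 OSDs

    pools = {
        # pool: (fraction of space used, bias) -- taken from the log lines above
        ".mgr":               (7.185749983720779e-06,  1.0),
        "vms":                (0.00034841348814872695, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07,  4.0),
    }

    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * PG_BUDGET:.6g}")
    # .mgr: pg target 0.00215572                 (logged 0.0021557249951162337 -> 1)
    # vms: pg target 0.104524                    (logged 0.10452404644461809  -> 32)
    # cephfs.cephfs.meta: pg target 0.000610471  (logged 0.0006104707950771635 -> 16)
    # The tiny ideals are then quantized to a power of two and clamped to pool
    # minimums, and pools are only resized when ideal and pg_num differ by a
    # large factor -- which is why everything here stays at its current 1/16/32.
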
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:15:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:15:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d88bd6d661dacb39db7ea8024a7e47fe62b506d8a3219dd40ab4f62301ad777b-merged.mount: Deactivated successfully.
Dec 05 10:15:28 compute-0 podman[265449]: 2025-12-05 10:15:28.429721138 +0000 UTC m=+1.239139378 container remove 3a2a5c1f18f32f17d0d7b794d828e0931cec03b3772c21c11cf77a65c6468104 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_carson, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 10:15:28 compute-0 systemd[1]: libpod-conmon-3a2a5c1f18f32f17d0d7b794d828e0931cec03b3772c21c11cf77a65c6468104.scope: Deactivated successfully.
Dec 05 10:15:28 compute-0 podman[265493]: 2025-12-05 10:15:28.594875117 +0000 UTC m=+0.023310305 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:15:28 compute-0 podman[265493]: 2025-12-05 10:15:28.77455594 +0000 UTC m=+0.202991078 container create 44d9740bbd30e5591c590b436ad5ac9003d313823050a6d15105661d73583dab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_northcutt, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 10:15:28 compute-0 systemd[1]: Started libpod-conmon-44d9740bbd30e5591c590b436ad5ac9003d313823050a6d15105661d73583dab.scope.
Dec 05 10:15:28 compute-0 ceph-mon[74418]: pgmap v789: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 05 10:15:28 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96abe6904c0de8e67d27bd58804b3151245479b327a1a8116e1ca15f66085c23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96abe6904c0de8e67d27bd58804b3151245479b327a1a8116e1ca15f66085c23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96abe6904c0de8e67d27bd58804b3151245479b327a1a8116e1ca15f66085c23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96abe6904c0de8e67d27bd58804b3151245479b327a1a8116e1ca15f66085c23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:28 compute-0 podman[265493]: 2025-12-05 10:15:28.877559429 +0000 UTC m=+0.305994587 container init 44d9740bbd30e5591c590b436ad5ac9003d313823050a6d15105661d73583dab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_northcutt, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 10:15:28 compute-0 podman[265493]: 2025-12-05 10:15:28.889571006 +0000 UTC m=+0.318006144 container start 44d9740bbd30e5591c590b436ad5ac9003d313823050a6d15105661d73583dab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_northcutt, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:15:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:29.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:15:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:29.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:15:29 compute-0 podman[265493]: 2025-12-05 10:15:29.593299501 +0000 UTC m=+1.021734659 container attach 44d9740bbd30e5591c590b436ad5ac9003d313823050a6d15105661d73583dab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_northcutt, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 05 10:15:29 compute-0 lvm[265584]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:15:29 compute-0 lvm[265584]: VG ceph_vg0 finished
Dec 05 10:15:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v790: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 05 10:15:29 compute-0 youthful_northcutt[265510]: {}
Dec 05 10:15:29 compute-0 systemd[1]: libpod-44d9740bbd30e5591c590b436ad5ac9003d313823050a6d15105661d73583dab.scope: Deactivated successfully.
Dec 05 10:15:29 compute-0 podman[265493]: 2025-12-05 10:15:29.79672076 +0000 UTC m=+1.225155908 container died 44d9740bbd30e5591c590b436ad5ac9003d313823050a6d15105661d73583dab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_northcutt, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 05 10:15:29 compute-0 systemd[1]: libpod-44d9740bbd30e5591c590b436ad5ac9003d313823050a6d15105661d73583dab.scope: Consumed 1.437s CPU time.
Dec 05 10:15:30 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Scheduled restart job, restart counter is at 7.
Dec 05 10:15:30 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 10:15:30 compute-0 systemd[1]: ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5@nfs.cephfs.2.0.compute-0.hocvro.service: Consumed 2.387s CPU time.
Dec 05 10:15:30 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5...
Dec 05 10:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-96abe6904c0de8e67d27bd58804b3151245479b327a1a8116e1ca15f66085c23-merged.mount: Deactivated successfully.
Dec 05 10:15:30 compute-0 ceph-mon[74418]: pgmap v790: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 05 10:15:30 compute-0 podman[265493]: 2025-12-05 10:15:30.887796283 +0000 UTC m=+2.316231431 container remove 44d9740bbd30e5591c590b436ad5ac9003d313823050a6d15105661d73583dab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:15:30 compute-0 sudo[265385]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:15:30 compute-0 systemd[1]: libpod-conmon-44d9740bbd30e5591c590b436ad5ac9003d313823050a6d15105661d73583dab.scope: Deactivated successfully.
Dec 05 10:15:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:15:31 compute-0 podman[265649]: 2025-12-05 10:15:31.157311118 +0000 UTC m=+0.026346037 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:15:31 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:31.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:31 compute-0 podman[265649]: 2025-12-05 10:15:31.350322453 +0000 UTC m=+0.219357372 container create 861f6a1b65dda022baecf3a1d543dbc6380dd0161a45bd75168d782fe13058a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 10:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1295d58284b4ad1e6341285ccce95dcc3f7bce6f8737d8b8dda19b998a37f100/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1295d58284b4ad1e6341285ccce95dcc3f7bce6f8737d8b8dda19b998a37f100/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1295d58284b4ad1e6341285ccce95dcc3f7bce6f8737d8b8dda19b998a37f100/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1295d58284b4ad1e6341285ccce95dcc3f7bce6f8737d8b8dda19b998a37f100/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hocvro-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:31.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:31 compute-0 sudo[265662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:15:31 compute-0 sudo[265662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:31 compute-0 sudo[265662]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v791: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 05 10:15:31 compute-0 podman[265649]: 2025-12-05 10:15:31.874420328 +0000 UTC m=+0.743455227 container init 861f6a1b65dda022baecf3a1d543dbc6380dd0161a45bd75168d782fe13058a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:15:31 compute-0 podman[265649]: 2025-12-05 10:15:31.879833015 +0000 UTC m=+0.748867904 container start 861f6a1b65dda022baecf3a1d543dbc6380dd0161a45bd75168d782fe13058a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 05 10:15:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:31 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 05 10:15:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:31 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 05 10:15:32 compute-0 sudo[265710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:15:32 compute-0 sudo[265710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:32 compute-0 sudo[265710]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:32 compute-0 bash[265649]: 861f6a1b65dda022baecf3a1d543dbc6380dd0161a45bd75168d782fe13058a4
Dec 05 10:15:32 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hocvro for 3c63ce0f-5206-59ae-8381-b67d0b6424b5.
Dec 05 10:15:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 05 10:15:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 05 10:15:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 05 10:15:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 05 10:15:32 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 05 10:15:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:33.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:33.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:15:33.655Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:15:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v792: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 05 10:15:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:15:34 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:34 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:15:34 compute-0 ceph-mon[74418]: pgmap v791: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 05 10:15:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:35.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:35.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:35 compute-0 ceph-mon[74418]: pgmap v792: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 05 10:15:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:15:35] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec 05 10:15:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:15:35] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec 05 10:15:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v793: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 58 op/s
Dec 05 10:15:37 compute-0 ceph-mon[74418]: pgmap v793: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 58 op/s
Dec 05 10:15:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:37.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:15:37.366Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:15:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:37.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:15:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.9 total, 600.0 interval
                                           Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2676 syncs, 3.83 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1133 writes, 3083 keys, 1133 commit groups, 1.0 writes per commit group, ingest: 2.58 MB, 0.00 MB/s
                                           Interval WAL: 1133 writes, 503 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:15:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v794: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 7 op/s
Dec 05 10:15:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:38 compute-0 ceph-mon[74418]: pgmap v794: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 7 op/s
Dec 05 10:15:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:39.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:39.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v795: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 05 10:15:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:39 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:15:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:39 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:15:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:39 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:15:40 compute-0 ceph-mon[74418]: pgmap v795: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 05 10:15:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:41.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:41 compute-0 podman[265766]: 2025-12-05 10:15:41.425577366 +0000 UTC m=+0.077920959 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 05 10:15:41 compute-0 podman[265767]: 2025-12-05 10:15:41.428941337 +0000 UTC m=+0.079701297 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec 05 10:15:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:41.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v796: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 05 10:15:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:15:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:15:42 compute-0 ceph-mon[74418]: pgmap v796: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 05 10:15:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:15:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:43.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:43.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:15:43.656Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:15:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v797: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 05 10:15:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:15:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:15:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:15:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:44 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:15:45 compute-0 ceph-mon[74418]: pgmap v797: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 05 10:15:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:45.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:45 compute-0 podman[265812]: 2025-12-05 10:15:45.427322415 +0000 UTC m=+0.093493523 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 05 10:15:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:45.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:15:45] "GET /metrics HTTP/1.1" 200 48569 "" "Prometheus/2.51.0"
Dec 05 10:15:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:15:45] "GET /metrics HTTP/1.1" 200 48569 "" "Prometheus/2.51.0"
Dec 05 10:15:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v798: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 05 10:15:47 compute-0 ceph-mon[74418]: pgmap v798: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 05 10:15:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:15:47.366Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:15:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:47.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:47.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v799: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Dec 05 10:15:48 compute-0 ceph-mon[74418]: pgmap v799: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Dec 05 10:15:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:15:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:15:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:15:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:49 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:15:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:49.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:49.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v800: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Dec 05 10:15:50 compute-0 ceph-mon[74418]: pgmap v800: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Dec 05 10:15:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:51.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:51.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v801: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 14 KiB/s wr, 1 op/s
Dec 05 10:15:52 compute-0 sudo[265845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:15:52 compute-0 sudo[265845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:15:52 compute-0 sudo[265845]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:52 compute-0 ceph-mon[74418]: pgmap v801: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 14 KiB/s wr, 1 op/s
Dec 05 10:15:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:53.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:53.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:15:53.673Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:15:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v802: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 14 KiB/s wr, 1 op/s
Dec 05 10:15:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:15:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:15:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:15:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:54 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:15:54 compute-0 ceph-mon[74418]: pgmap v802: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 14 KiB/s wr, 1 op/s
Dec 05 10:15:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:15:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:55.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:15:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:55.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:15:55] "GET /metrics HTTP/1.1" 200 48569 "" "Prometheus/2.51.0"
Dec 05 10:15:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:15:55] "GET /metrics HTTP/1.1" 200 48569 "" "Prometheus/2.51.0"
Dec 05 10:15:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v803: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 14 KiB/s wr, 1 op/s
Dec 05 10:15:56 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:15:56.200 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:15:56 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:15:56.203 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:15:56 compute-0 ceph-mon[74418]: pgmap v803: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 14 KiB/s wr, 1 op/s
Dec 05 10:15:57 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:15:57.206 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:15:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:15:57.368Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:15:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:57.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:15:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:57.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:15:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:15:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:15:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:15:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:15:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:15:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:15:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:15:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:15:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v804: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Dec 05 10:15:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/318447530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:15:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/318447530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:15:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:15:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:58 compute-0 ceph-mon[74418]: pgmap v804: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Dec 05 10:15:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2127105420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:15:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:15:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:15:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:15:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:15:59 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:15:59 compute-0 nova_compute[257087]: 2025-12-05 10:15:59.062 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:59 compute-0 nova_compute[257087]: 2025-12-05 10:15:59.064 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:59 compute-0 nova_compute[257087]: 2025-12-05 10:15:59.086 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:59 compute-0 nova_compute[257087]: 2025-12-05 10:15:59.087 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:15:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:15:59.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:15:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:15:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:15:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:15:59.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:15:59 compute-0 nova_compute[257087]: 2025-12-05 10:15:59.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:59 compute-0 nova_compute[257087]: 2025-12-05 10:15:59.531 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:15:59 compute-0 nova_compute[257087]: 2025-12-05 10:15:59.532 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:15:59 compute-0 nova_compute[257087]: 2025-12-05 10:15:59.557 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:15:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v805: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.2 KiB/s wr, 29 op/s
Dec 05 10:15:59 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3697342831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:16:00 compute-0 nova_compute[257087]: 2025-12-05 10:16:00.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:00 compute-0 nova_compute[257087]: 2025-12-05 10:16:00.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:00 compute-0 nova_compute[257087]: 2025-12-05 10:16:00.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:00 compute-0 ceph-mon[74418]: pgmap v805: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.2 KiB/s wr, 29 op/s
Dec 05 10:16:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2089157654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:16:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:16:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:01.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:16:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:01.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:01 compute-0 nova_compute[257087]: 2025-12-05 10:16:01.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:01 compute-0 nova_compute[257087]: 2025-12-05 10:16:01.553 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:16:01 compute-0 nova_compute[257087]: 2025-12-05 10:16:01.554 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:16:01 compute-0 nova_compute[257087]: 2025-12-05 10:16:01.555 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:16:01 compute-0 nova_compute[257087]: 2025-12-05 10:16:01.555 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:16:01 compute-0 nova_compute[257087]: 2025-12-05 10:16:01.557 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:16:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Dec 05 10:16:01 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2994170516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:16:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:16:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687311170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.109 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.291 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.292 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4937MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.292 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.293 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.372 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.373 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.389 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:16:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:16:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3460985400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.858 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.864 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.900 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.901 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:16:02 compute-0 nova_compute[257087]: 2025-12-05 10:16:02.901 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:16:02 compute-0 ceph-mon[74418]: pgmap v806: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Dec 05 10:16:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/244829315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:16:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/687311170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:16:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3460985400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:16:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:03.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:16:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:03.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:16:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:03.674Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:16:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:03.674Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:16:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Dec 05 10:16:03 compute-0 sshd-session[265925]: Connection closed by 43.225.159.82 port 54772
Dec 05 10:16:03 compute-0 nova_compute[257087]: 2025-12-05 10:16:03.902 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:03 compute-0 nova_compute[257087]: 2025-12-05 10:16:03.902 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:16:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:16:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:16:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:16:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:04 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:16:04 compute-0 ceph-mon[74418]: pgmap v807: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Dec 05 10:16:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:05.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:05.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:16:05] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec 05 10:16:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:16:05] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec 05 10:16:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Dec 05 10:16:06 compute-0 ceph-mon[74418]: pgmap v808: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Dec 05 10:16:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:07.368Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:16:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:07.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:07.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Dec 05 10:16:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:16:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:09 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:16:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:09 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:16:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:09 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:16:09 compute-0 ceph-mon[74418]: pgmap v809: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Dec 05 10:16:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:09.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:09.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Dec 05 10:16:10 compute-0 ceph-mon[74418]: pgmap v810: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Dec 05 10:16:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:11.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:11.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:16:12 compute-0 podman[265935]: 2025-12-05 10:16:12.414215517 +0000 UTC m=+0.064014411 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:16:12 compute-0 podman[265936]: 2025-12-05 10:16:12.444927612 +0000 UTC m=+0.096744161 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:16:12 compute-0 sudo[265976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:16:12 compute-0 sudo[265976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:12 compute-0 sudo[265976]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:16:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:16:12 compute-0 ceph-mon[74418]: pgmap v811: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:16:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:16:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:13.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:13.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:13.675Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:16:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:16:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:16:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:16:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:16:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:14 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:16:14 compute-0 ceph-mon[74418]: pgmap v812: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:16:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:16:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:15.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:16:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:15.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:16:15] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:16:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:16:15] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:16:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:16:16 compute-0 podman[266004]: 2025-12-05 10:16:16.476293515 +0000 UTC m=+0.141509588 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:16:16 compute-0 ceph-mon[74418]: pgmap v813: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:16:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:17.370Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:16:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:17.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:17.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:16:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:18 compute-0 ceph-mon[74418]: pgmap v814: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:16:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:16:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:16:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:16:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:19 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:16:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:19.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Dec 05 10:16:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:16:20.573 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:16:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:16:20.574 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:16:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:16:20.574 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:16:20 compute-0 ceph-mon[74418]: pgmap v815: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Dec 05 10:16:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:21.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:21.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:16:23 compute-0 ceph-mon[74418]: pgmap v816: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:16:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:23.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:23.676Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:16:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:16:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:16:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:16:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:16:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:24 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:16:24 compute-0 ceph-mon[74418]: pgmap v817: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:16:25 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2734247840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:16:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:25.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:25.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:16:25] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:16:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:16:25] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:16:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:16:26 compute-0 ceph-mon[74418]: pgmap v818: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:16:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:27.371Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:16:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:27.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:27.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:16:27
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', '.mgr', '.rgw.root', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.control', '.nfs', 'default.rgw.meta', 'images', 'cephfs.cephfs.data']
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:16:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:16:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:16:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
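The pg_autoscaler lines above report, per pool, the fraction of cluster space in use, the pool's bias, and the resulting raw PG target before it is quantized to a power of two and compared against the current pg_num. The logged targets are consistent with raw_target = capacity_ratio * bias * 300, where 300 is assumed to correspond to the default 100 target PGs per OSD across this cluster's three OSDs. A short check against three of the logged values:

    # Sketch only: reproduce the "pg target" numbers the pg_autoscaler logs above.
    TARGET_PGS = 300  # assumed: 100 PGs per OSD (default) * 3 OSDs in this cluster

    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("images",             0.000665858301588852,  1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]

    for name, capacity_ratio, bias in pools:
        raw_target = capacity_ratio * bias * TARGET_PGS
        print(f"{name}: pg target {raw_target:.10g}")
    # .mgr -> 0.002155725, images -> 0.1997575, cephfs.cephfs.meta -> 0.0006104708
    # matching the values in the log before power-of-two quantization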
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:16:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:16:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:28 compute-0 ceph-mon[74418]: pgmap v819: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:16:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:16:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:16:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:16:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:29 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:16:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:29.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:29.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 10:16:30 compute-0 ceph-mon[74418]: pgmap v820: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 10:16:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:31.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:31.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:31 compute-0 sudo[266046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:16:31 compute-0 sudo[266046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:31 compute-0 sudo[266046]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:31 compute-0 sudo[266071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:16:31 compute-0 sudo[266071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 10:16:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 10:16:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 10:16:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:32 compute-0 sudo[266071]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:32 compute-0 sudo[266131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:16:32 compute-0 sudo[266131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:32 compute-0 sudo[266131]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:32 compute-0 sudo[266156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- inventory --format=json-pretty --filter-for-batch
Dec 05 10:16:32 compute-0 sudo[266156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:32 compute-0 sudo[266160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:16:32 compute-0 sudo[266160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:32 compute-0 sudo[266160]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:32 compute-0 podman[266250]: 2025-12-05 10:16:32.960826526 +0000 UTC m=+0.045065105 container create e0722a4a57e129163cd6662ed94b684be8c73f5e59081b4c72a61a3dd922b7e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rubin, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 10:16:32 compute-0 systemd[1]: Started libpod-conmon-e0722a4a57e129163cd6662ed94b684be8c73f5e59081b4c72a61a3dd922b7e3.scope.
Dec 05 10:16:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:16:33 compute-0 podman[266250]: 2025-12-05 10:16:32.9407145 +0000 UTC m=+0.024953129 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:16:33 compute-0 ceph-mon[74418]: pgmap v821: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 10:16:33 compute-0 podman[266250]: 2025-12-05 10:16:33.106971519 +0000 UTC m=+0.191210088 container init e0722a4a57e129163cd6662ed94b684be8c73f5e59081b4c72a61a3dd922b7e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:16:33 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:33 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:33 compute-0 podman[266250]: 2025-12-05 10:16:33.114335898 +0000 UTC m=+0.198574467 container start e0722a4a57e129163cd6662ed94b684be8c73f5e59081b4c72a61a3dd922b7e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:16:33 compute-0 amazing_rubin[266266]: 167 167
Dec 05 10:16:33 compute-0 systemd[1]: libpod-e0722a4a57e129163cd6662ed94b684be8c73f5e59081b4c72a61a3dd922b7e3.scope: Deactivated successfully.
Dec 05 10:16:33 compute-0 conmon[266266]: conmon e0722a4a57e129163cd6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0722a4a57e129163cd6662ed94b684be8c73f5e59081b4c72a61a3dd922b7e3.scope/container/memory.events
Dec 05 10:16:33 compute-0 podman[266250]: 2025-12-05 10:16:33.135510754 +0000 UTC m=+0.219749333 container attach e0722a4a57e129163cd6662ed94b684be8c73f5e59081b4c72a61a3dd922b7e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rubin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 10:16:33 compute-0 podman[266250]: 2025-12-05 10:16:33.136386348 +0000 UTC m=+0.220624927 container died e0722a4a57e129163cd6662ed94b684be8c73f5e59081b4c72a61a3dd922b7e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rubin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 10:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-669a5f03fe32c82281d8792ca6c1fb76e190338cb081f78221b56b4fdc855a42-merged.mount: Deactivated successfully.
Dec 05 10:16:33 compute-0 podman[266250]: 2025-12-05 10:16:33.218802537 +0000 UTC m=+0.303041156 container remove e0722a4a57e129163cd6662ed94b684be8c73f5e59081b4c72a61a3dd922b7e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 10:16:33 compute-0 systemd[1]: libpod-conmon-e0722a4a57e129163cd6662ed94b684be8c73f5e59081b4c72a61a3dd922b7e3.scope: Deactivated successfully.
Dec 05 10:16:33 compute-0 podman[266290]: 2025-12-05 10:16:33.405720568 +0000 UTC m=+0.053653829 container create e7543177e709a01efec308c399bc802b0eaa4f1f533712af2ee61b00a6f8d66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:16:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:33.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:33 compute-0 systemd[1]: Started libpod-conmon-e7543177e709a01efec308c399bc802b0eaa4f1f533712af2ee61b00a6f8d66a.scope.
Dec 05 10:16:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:16:33 compute-0 podman[266290]: 2025-12-05 10:16:33.383831603 +0000 UTC m=+0.031764904 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1fd3431aa215ffdf69394b699d46654bd8e924097c94e538dd7f0e3fee19dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1fd3431aa215ffdf69394b699d46654bd8e924097c94e538dd7f0e3fee19dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1fd3431aa215ffdf69394b699d46654bd8e924097c94e538dd7f0e3fee19dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1fd3431aa215ffdf69394b699d46654bd8e924097c94e538dd7f0e3fee19dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:33.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:33 compute-0 podman[266290]: 2025-12-05 10:16:33.605454426 +0000 UTC m=+0.253387767 container init e7543177e709a01efec308c399bc802b0eaa4f1f533712af2ee61b00a6f8d66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:16:33 compute-0 podman[266290]: 2025-12-05 10:16:33.613024202 +0000 UTC m=+0.260957493 container start e7543177e709a01efec308c399bc802b0eaa4f1f533712af2ee61b00a6f8d66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_nash, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 05 10:16:33 compute-0 podman[266290]: 2025-12-05 10:16:33.627227118 +0000 UTC m=+0.275160389 container attach e7543177e709a01efec308c399bc802b0eaa4f1f533712af2ee61b00a6f8d66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 10:16:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:33.677Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:16:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:33.678Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:16:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:33.679Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
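[annotation] The three Alertmanager lines above show the ceph-dashboard webhook receivers on compute-1/compute-2 (port 8443) timing out and the notification being dropped after retries. Purely as an illustrative sketch (endpoints and port copied from the log, the 5-second timeout is an assumption), one could probe those receivers for basic TCP reachability from this host with something like:

    import socket

    # Endpoints taken from the Alertmanager warnings above; timeout is an assumed value.
    receivers = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]

    for host, port in receivers:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")

[annotation] This only checks that the port accepts connections; it says nothing about whether the dashboard API behind it would accept the webhook payload.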
Dec 05 10:16:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:16:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:16:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:16:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:16:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:34 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:16:34 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2457575268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:16:34 compute-0 ceph-mon[74418]: pgmap v822: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:16:34 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3074154045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:16:34 compute-0 agitated_nash[266306]: [
Dec 05 10:16:34 compute-0 agitated_nash[266306]:     {
Dec 05 10:16:34 compute-0 agitated_nash[266306]:         "available": false,
Dec 05 10:16:34 compute-0 agitated_nash[266306]:         "being_replaced": false,
Dec 05 10:16:34 compute-0 agitated_nash[266306]:         "ceph_device_lvm": false,
Dec 05 10:16:34 compute-0 agitated_nash[266306]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:         "lsm_data": {},
Dec 05 10:16:34 compute-0 agitated_nash[266306]:         "lvs": [],
Dec 05 10:16:34 compute-0 agitated_nash[266306]:         "path": "/dev/sr0",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:         "rejected_reasons": [
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "Insufficient space (<5GB)",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "Has a FileSystem"
Dec 05 10:16:34 compute-0 agitated_nash[266306]:         ],
Dec 05 10:16:34 compute-0 agitated_nash[266306]:         "sys_api": {
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "actuators": null,
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "device_nodes": [
Dec 05 10:16:34 compute-0 agitated_nash[266306]:                 "sr0"
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             ],
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "devname": "sr0",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "human_readable_size": "482.00 KB",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "id_bus": "ata",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "model": "QEMU DVD-ROM",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "nr_requests": "2",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "parent": "/dev/sr0",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "partitions": {},
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "path": "/dev/sr0",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "removable": "1",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "rev": "2.5+",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "ro": "0",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "rotational": "1",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "sas_address": "",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "sas_device_handle": "",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "scheduler_mode": "mq-deadline",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "sectors": 0,
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "sectorsize": "2048",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "size": 493568.0,
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "support_discard": "2048",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "type": "disk",
Dec 05 10:16:34 compute-0 agitated_nash[266306]:             "vendor": "QEMU"
Dec 05 10:16:34 compute-0 agitated_nash[266306]:         }
Dec 05 10:16:34 compute-0 agitated_nash[266306]:     }
Dec 05 10:16:34 compute-0 agitated_nash[266306]: ]
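[annotation] The JSON array emitted by container agitated_nash above is the device inventory that cephadm gathers via ceph-volume: /dev/sr0 is reported as unavailable, with "Insufficient space (<5GB)" and "Has a FileSystem" as rejection reasons. A minimal, hedged Python sketch of how such an inventory payload could be filtered, assuming it was captured to a hypothetical file inventory.json (the filename is not from the log):

    import json

    # Hypothetical capture of the ceph-volume inventory output shown above.
    with open("inventory.json") as f:
        devices = json.load(f)

    for dev in devices:
        if dev.get("available"):
            print(f"usable: {dev['path']} ({dev['sys_api'].get('human_readable_size')})")
        else:
            reasons = ", ".join(dev.get("rejected_reasons", []))
            print(f"rejected: {dev['path']} -> {reasons}")

[annotation] For the inventory above this would print a single "rejected" line for /dev/sr0 listing both reasons.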
Dec 05 10:16:34 compute-0 systemd[1]: libpod-e7543177e709a01efec308c399bc802b0eaa4f1f533712af2ee61b00a6f8d66a.scope: Deactivated successfully.
Dec 05 10:16:34 compute-0 podman[267579]: 2025-12-05 10:16:34.62037989 +0000 UTC m=+0.031049915 container died e7543177e709a01efec308c399bc802b0eaa4f1f533712af2ee61b00a6f8d66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_nash, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 10:16:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 10:16:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef1fd3431aa215ffdf69394b699d46654bd8e924097c94e538dd7f0e3fee19dd-merged.mount: Deactivated successfully.
Dec 05 10:16:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 10:16:34 compute-0 podman[267579]: 2025-12-05 10:16:34.659143813 +0000 UTC m=+0.069813808 container remove e7543177e709a01efec308c399bc802b0eaa4f1f533712af2ee61b00a6f8d66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_nash, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 10:16:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:34 compute-0 systemd[1]: libpod-conmon-e7543177e709a01efec308c399bc802b0eaa4f1f533712af2ee61b00a6f8d66a.scope: Deactivated successfully.
Dec 05 10:16:34 compute-0 sudo[266156]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:16:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:16:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:16:34 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:16:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:16:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:16:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:16:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:16:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:16:34 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:16:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:16:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:16:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:16:34 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:16:34 compute-0 sudo[267594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:16:34 compute-0 sudo[267594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:34 compute-0 sudo[267594]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:34 compute-0 sudo[267619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:16:34 compute-0 sudo[267619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:35 compute-0 podman[267685]: 2025-12-05 10:16:35.405357084 +0000 UTC m=+0.040552924 container create 14100430d2c0ac8b57be8a0f19730f029f27bf4a5eb85d5072118df743015092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_lichterman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:16:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:35.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:35 compute-0 systemd[1]: Started libpod-conmon-14100430d2c0ac8b57be8a0f19730f029f27bf4a5eb85d5072118df743015092.scope.
Dec 05 10:16:35 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:16:35.462 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:16:35 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:16:35.463 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:16:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:16:35 compute-0 podman[267685]: 2025-12-05 10:16:35.38829563 +0000 UTC m=+0.023491490 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:16:35 compute-0 podman[267685]: 2025-12-05 10:16:35.490128217 +0000 UTC m=+0.125324067 container init 14100430d2c0ac8b57be8a0f19730f029f27bf4a5eb85d5072118df743015092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 10:16:35 compute-0 podman[267685]: 2025-12-05 10:16:35.500444668 +0000 UTC m=+0.135640508 container start 14100430d2c0ac8b57be8a0f19730f029f27bf4a5eb85d5072118df743015092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_lichterman, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:16:35 compute-0 podman[267685]: 2025-12-05 10:16:35.503873121 +0000 UTC m=+0.139068971 container attach 14100430d2c0ac8b57be8a0f19730f029f27bf4a5eb85d5072118df743015092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 10:16:35 compute-0 funny_lichterman[267701]: 167 167
Dec 05 10:16:35 compute-0 systemd[1]: libpod-14100430d2c0ac8b57be8a0f19730f029f27bf4a5eb85d5072118df743015092.scope: Deactivated successfully.
Dec 05 10:16:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:35.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:35 compute-0 podman[267706]: 2025-12-05 10:16:35.547532178 +0000 UTC m=+0.025796072 container died 14100430d2c0ac8b57be8a0f19730f029f27bf4a5eb85d5072118df743015092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_lichterman, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-406010927cf4b3dc8773d989e4c5d6911f78c803a4c5117bf1f44b922e1560b6-merged.mount: Deactivated successfully.
Dec 05 10:16:35 compute-0 podman[267706]: 2025-12-05 10:16:35.591664156 +0000 UTC m=+0.069928010 container remove 14100430d2c0ac8b57be8a0f19730f029f27bf4a5eb85d5072118df743015092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 10:16:35 compute-0 systemd[1]: libpod-conmon-14100430d2c0ac8b57be8a0f19730f029f27bf4a5eb85d5072118df743015092.scope: Deactivated successfully.
Dec 05 10:16:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:16:35] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:16:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:16:35] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:16:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:16:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:16:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:16:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:16:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:16:35 compute-0 podman[267728]: 2025-12-05 10:16:35.783315826 +0000 UTC m=+0.050279678 container create 0bf20bbcd4eeab31053c5025c1308cc4986400f60315754e420baa68f71a4ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_dubinsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 10:16:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:16:35 compute-0 systemd[1]: Started libpod-conmon-0bf20bbcd4eeab31053c5025c1308cc4986400f60315754e420baa68f71a4ba7.scope.
Dec 05 10:16:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9303f2a518df6d2692508e2fb19ad0e95946d05d26c18cd630e814a0b7012f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9303f2a518df6d2692508e2fb19ad0e95946d05d26c18cd630e814a0b7012f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9303f2a518df6d2692508e2fb19ad0e95946d05d26c18cd630e814a0b7012f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:35 compute-0 podman[267728]: 2025-12-05 10:16:35.759863659 +0000 UTC m=+0.026827531 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9303f2a518df6d2692508e2fb19ad0e95946d05d26c18cd630e814a0b7012f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9303f2a518df6d2692508e2fb19ad0e95946d05d26c18cd630e814a0b7012f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:35 compute-0 podman[267728]: 2025-12-05 10:16:35.874591856 +0000 UTC m=+0.141555718 container init 0bf20bbcd4eeab31053c5025c1308cc4986400f60315754e420baa68f71a4ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_dubinsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:16:35 compute-0 podman[267728]: 2025-12-05 10:16:35.889093011 +0000 UTC m=+0.156056863 container start 0bf20bbcd4eeab31053c5025c1308cc4986400f60315754e420baa68f71a4ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_dubinsky, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 10:16:35 compute-0 podman[267728]: 2025-12-05 10:16:35.895138745 +0000 UTC m=+0.162102587 container attach 0bf20bbcd4eeab31053c5025c1308cc4986400f60315754e420baa68f71a4ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_dubinsky, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:16:36 compute-0 gallant_dubinsky[267745]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:16:36 compute-0 gallant_dubinsky[267745]: --> All data devices are unavailable
Dec 05 10:16:36 compute-0 systemd[1]: libpod-0bf20bbcd4eeab31053c5025c1308cc4986400f60315754e420baa68f71a4ba7.scope: Deactivated successfully.
Dec 05 10:16:36 compute-0 podman[267728]: 2025-12-05 10:16:36.298537288 +0000 UTC m=+0.565501160 container died 0bf20bbcd4eeab31053c5025c1308cc4986400f60315754e420baa68f71a4ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_dubinsky, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 10:16:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-af9303f2a518df6d2692508e2fb19ad0e95946d05d26c18cd630e814a0b7012f-merged.mount: Deactivated successfully.
Dec 05 10:16:36 compute-0 podman[267728]: 2025-12-05 10:16:36.363773721 +0000 UTC m=+0.630737583 container remove 0bf20bbcd4eeab31053c5025c1308cc4986400f60315754e420baa68f71a4ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:16:36 compute-0 systemd[1]: libpod-conmon-0bf20bbcd4eeab31053c5025c1308cc4986400f60315754e420baa68f71a4ba7.scope: Deactivated successfully.
Dec 05 10:16:36 compute-0 sudo[267619]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:36 compute-0 sudo[267776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:16:36 compute-0 sudo[267776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:36 compute-0 sudo[267776]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:36 compute-0 sudo[267801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:16:36 compute-0 sudo[267801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:36 compute-0 ceph-mon[74418]: pgmap v823: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:16:37 compute-0 podman[267868]: 2025-12-05 10:16:37.187279582 +0000 UTC m=+0.120247459 container create 7071c684152b26ed6c63557c30a353f5e994c1daeeb171eb8688cbde396d20f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 10:16:37 compute-0 podman[267868]: 2025-12-05 10:16:37.0988835 +0000 UTC m=+0.031851377 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:16:37 compute-0 systemd[1]: Started libpod-conmon-7071c684152b26ed6c63557c30a353f5e994c1daeeb171eb8688cbde396d20f1.scope.
Dec 05 10:16:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:16:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:37.371Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:16:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:37.372Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:16:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:16:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:37.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:16:37 compute-0 podman[267868]: 2025-12-05 10:16:37.441493631 +0000 UTC m=+0.374461558 container init 7071c684152b26ed6c63557c30a353f5e994c1daeeb171eb8688cbde396d20f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:16:37 compute-0 podman[267868]: 2025-12-05 10:16:37.448776368 +0000 UTC m=+0.381744205 container start 7071c684152b26ed6c63557c30a353f5e994c1daeeb171eb8688cbde396d20f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 10:16:37 compute-0 systemd[1]: libpod-7071c684152b26ed6c63557c30a353f5e994c1daeeb171eb8688cbde396d20f1.scope: Deactivated successfully.
Dec 05 10:16:37 compute-0 magical_turing[267885]: 167 167
Dec 05 10:16:37 compute-0 conmon[267885]: conmon 7071c684152b26ed6c63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7071c684152b26ed6c63557c30a353f5e994c1daeeb171eb8688cbde396d20f1.scope/container/memory.events
Dec 05 10:16:37 compute-0 podman[267868]: 2025-12-05 10:16:37.459937272 +0000 UTC m=+0.392905209 container attach 7071c684152b26ed6c63557c30a353f5e994c1daeeb171eb8688cbde396d20f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 10:16:37 compute-0 podman[267868]: 2025-12-05 10:16:37.460648682 +0000 UTC m=+0.393616569 container died 7071c684152b26ed6c63557c30a353f5e994c1daeeb171eb8688cbde396d20f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_turing, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1fcb599e3eaa8bc8b7a0ed8e8014a01e9744f54d073cf2a345fa5fd9a29990b-merged.mount: Deactivated successfully.
Dec 05 10:16:37 compute-0 podman[267868]: 2025-12-05 10:16:37.50476758 +0000 UTC m=+0.437735427 container remove 7071c684152b26ed6c63557c30a353f5e994c1daeeb171eb8688cbde396d20f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 10:16:37 compute-0 systemd[1]: libpod-conmon-7071c684152b26ed6c63557c30a353f5e994c1daeeb171eb8688cbde396d20f1.scope: Deactivated successfully.
Dec 05 10:16:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:37.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:37 compute-0 podman[267908]: 2025-12-05 10:16:37.690402066 +0000 UTC m=+0.057225857 container create 0ab59aea13e000d30d1aa875f79ab1e912900985bd770fdc6e37f4cbc82646e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:16:37 compute-0 systemd[1]: Started libpod-conmon-0ab59aea13e000d30d1aa875f79ab1e912900985bd770fdc6e37f4cbc82646e9.scope.
Dec 05 10:16:37 compute-0 podman[267908]: 2025-12-05 10:16:37.665482049 +0000 UTC m=+0.032305920 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:16:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:16:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35be0f50af192fdc5b641ce5a7224ec5d61437879786b56f9f4fdb4519fa4fa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35be0f50af192fdc5b641ce5a7224ec5d61437879786b56f9f4fdb4519fa4fa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35be0f50af192fdc5b641ce5a7224ec5d61437879786b56f9f4fdb4519fa4fa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35be0f50af192fdc5b641ce5a7224ec5d61437879786b56f9f4fdb4519fa4fa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 10:16:37 compute-0 podman[267908]: 2025-12-05 10:16:37.788864592 +0000 UTC m=+0.155688483 container init 0ab59aea13e000d30d1aa875f79ab1e912900985bd770fdc6e37f4cbc82646e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:16:37 compute-0 podman[267908]: 2025-12-05 10:16:37.801343231 +0000 UTC m=+0.168167022 container start 0ab59aea13e000d30d1aa875f79ab1e912900985bd770fdc6e37f4cbc82646e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:16:37 compute-0 podman[267908]: 2025-12-05 10:16:37.809760919 +0000 UTC m=+0.176584820 container attach 0ab59aea13e000d30d1aa875f79ab1e912900985bd770fdc6e37f4cbc82646e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_shaw, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 05 10:16:38 compute-0 focused_shaw[267924]: {
Dec 05 10:16:38 compute-0 focused_shaw[267924]:     "1": [
Dec 05 10:16:38 compute-0 focused_shaw[267924]:         {
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             "devices": [
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "/dev/loop3"
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             ],
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             "lv_name": "ceph_lv0",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             "lv_size": "21470642176",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             "name": "ceph_lv0",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             "tags": {
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.cluster_name": "ceph",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.crush_device_class": "",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.encrypted": "0",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.osd_id": "1",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.type": "block",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.vdo": "0",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:                 "ceph.with_tpm": "0"
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             },
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             "type": "block",
Dec 05 10:16:38 compute-0 focused_shaw[267924]:             "vg_name": "ceph_vg0"
Dec 05 10:16:38 compute-0 focused_shaw[267924]:         }
Dec 05 10:16:38 compute-0 focused_shaw[267924]:     ]
Dec 05 10:16:38 compute-0 focused_shaw[267924]: }
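[annotation] The "ceph-volume lvm list --format json" output from container focused_shaw above maps OSD id "1" to the logical volume /dev/ceph_vg0/ceph_lv0 (osd_fsid f2cb7ff3-5059-40ee-ae0a-c37b437655e2, backed by /dev/loop3). This is consistent with the earlier lvm batch run (gallant_dubinsky) reporting "All data devices are unavailable": the only candidate LV already carries a prepared OSD. As a hedged sketch only, the OSD-to-LV mapping could be pulled out of such a payload like this, assuming it was saved to a hypothetical file lvm_list.json (filename not from the log):

    import json

    # Hypothetical capture of the "ceph-volume lvm list --format json" output shown above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, volumes in lvm.items():
        for vol in volumes:
            tags = vol.get("tags", {})
            devices = ",".join(vol.get("devices", []))
            print(f"osd.{osd_id}: {vol['lv_path']} "
                  f"(osd_fsid={tags.get('ceph.osd_fsid')}, devices={devices})")

[annotation] For the payload above this would print one line: osd.1 on /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3.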
Dec 05 10:16:38 compute-0 systemd[1]: libpod-0ab59aea13e000d30d1aa875f79ab1e912900985bd770fdc6e37f4cbc82646e9.scope: Deactivated successfully.
Dec 05 10:16:38 compute-0 podman[267908]: 2025-12-05 10:16:38.120564186 +0000 UTC m=+0.487387987 container died 0ab59aea13e000d30d1aa875f79ab1e912900985bd770fdc6e37f4cbc82646e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 05 10:16:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-35be0f50af192fdc5b641ce5a7224ec5d61437879786b56f9f4fdb4519fa4fa1-merged.mount: Deactivated successfully.
Dec 05 10:16:38 compute-0 podman[267908]: 2025-12-05 10:16:38.16669149 +0000 UTC m=+0.533515291 container remove 0ab59aea13e000d30d1aa875f79ab1e912900985bd770fdc6e37f4cbc82646e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:16:38 compute-0 systemd[1]: libpod-conmon-0ab59aea13e000d30d1aa875f79ab1e912900985bd770fdc6e37f4cbc82646e9.scope: Deactivated successfully.
Dec 05 10:16:38 compute-0 sudo[267801]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:38 compute-0 sudo[267947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:16:38 compute-0 sudo[267947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:38 compute-0 sudo[267947]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:38 compute-0 sudo[267972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:16:38 compute-0 sudo[267972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:38 compute-0 podman[268039]: 2025-12-05 10:16:38.791987014 +0000 UTC m=+0.049786774 container create fe938cbada0f7fe2050151c5d98fdffcd00f26595e3568907be3d8467ac527b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cerf, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:16:38 compute-0 systemd[1]: Started libpod-conmon-fe938cbada0f7fe2050151c5d98fdffcd00f26595e3568907be3d8467ac527b3.scope.
Dec 05 10:16:38 compute-0 podman[268039]: 2025-12-05 10:16:38.765177995 +0000 UTC m=+0.022977765 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:16:38 compute-0 ceph-mon[74418]: pgmap v824: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 10:16:38 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:16:38 compute-0 podman[268039]: 2025-12-05 10:16:38.88051992 +0000 UTC m=+0.138319730 container init fe938cbada0f7fe2050151c5d98fdffcd00f26595e3568907be3d8467ac527b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cerf, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:16:38 compute-0 podman[268039]: 2025-12-05 10:16:38.888761474 +0000 UTC m=+0.146561234 container start fe938cbada0f7fe2050151c5d98fdffcd00f26595e3568907be3d8467ac527b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cerf, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:16:38 compute-0 podman[268039]: 2025-12-05 10:16:38.892585328 +0000 UTC m=+0.150385108 container attach fe938cbada0f7fe2050151c5d98fdffcd00f26595e3568907be3d8467ac527b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cerf, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Dec 05 10:16:38 compute-0 suspicious_cerf[268055]: 167 167
Dec 05 10:16:38 compute-0 systemd[1]: libpod-fe938cbada0f7fe2050151c5d98fdffcd00f26595e3568907be3d8467ac527b3.scope: Deactivated successfully.
Dec 05 10:16:38 compute-0 podman[268039]: 2025-12-05 10:16:38.894202842 +0000 UTC m=+0.152002622 container died fe938cbada0f7fe2050151c5d98fdffcd00f26595e3568907be3d8467ac527b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 10:16:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c21d8a1814aedcef851ee361e2f0fe87f4e7f7fb5bdc5a60f8949c2a733e3847-merged.mount: Deactivated successfully.
Dec 05 10:16:38 compute-0 podman[268039]: 2025-12-05 10:16:38.927798105 +0000 UTC m=+0.185597885 container remove fe938cbada0f7fe2050151c5d98fdffcd00f26595e3568907be3d8467ac527b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 05 10:16:38 compute-0 systemd[1]: libpod-conmon-fe938cbada0f7fe2050151c5d98fdffcd00f26595e3568907be3d8467ac527b3.scope: Deactivated successfully.
Dec 05 10:16:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:16:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:39 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:16:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:39 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:16:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:39 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:16:39 compute-0 podman[268080]: 2025-12-05 10:16:39.095648257 +0000 UTC m=+0.048924570 container create c2968c69e584a53fe5647ebda6920535272394cdc646fb16be3728777ec3ede5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_thompson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:16:39 compute-0 systemd[1]: Started libpod-conmon-c2968c69e584a53fe5647ebda6920535272394cdc646fb16be3728777ec3ede5.scope.
Dec 05 10:16:39 compute-0 podman[268080]: 2025-12-05 10:16:39.071945663 +0000 UTC m=+0.025221996 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:16:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35271af9cbf56b6b875512b1c971fc388de9a94d9e2c494836936255fcbedf33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35271af9cbf56b6b875512b1c971fc388de9a94d9e2c494836936255fcbedf33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35271af9cbf56b6b875512b1c971fc388de9a94d9e2c494836936255fcbedf33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35271af9cbf56b6b875512b1c971fc388de9a94d9e2c494836936255fcbedf33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:39 compute-0 podman[268080]: 2025-12-05 10:16:39.192443048 +0000 UTC m=+0.145719381 container init c2968c69e584a53fe5647ebda6920535272394cdc646fb16be3728777ec3ede5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 05 10:16:39 compute-0 podman[268080]: 2025-12-05 10:16:39.203378325 +0000 UTC m=+0.156654658 container start c2968c69e584a53fe5647ebda6920535272394cdc646fb16be3728777ec3ede5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 10:16:39 compute-0 podman[268080]: 2025-12-05 10:16:39.207722493 +0000 UTC m=+0.160998826 container attach c2968c69e584a53fe5647ebda6920535272394cdc646fb16be3728777ec3ede5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_thompson, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:16:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:39.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:39.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 05 10:16:39 compute-0 lvm[268172]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:16:39 compute-0 lvm[268172]: VG ceph_vg0 finished
Dec 05 10:16:39 compute-0 distracted_thompson[268097]: {}
Dec 05 10:16:40 compute-0 systemd[1]: libpod-c2968c69e584a53fe5647ebda6920535272394cdc646fb16be3728777ec3ede5.scope: Deactivated successfully.
Dec 05 10:16:40 compute-0 systemd[1]: libpod-c2968c69e584a53fe5647ebda6920535272394cdc646fb16be3728777ec3ede5.scope: Consumed 1.306s CPU time.
Dec 05 10:16:40 compute-0 podman[268080]: 2025-12-05 10:16:40.01197321 +0000 UTC m=+0.965249543 container died c2968c69e584a53fe5647ebda6920535272394cdc646fb16be3728777ec3ede5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec 05 10:16:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-35271af9cbf56b6b875512b1c971fc388de9a94d9e2c494836936255fcbedf33-merged.mount: Deactivated successfully.
Dec 05 10:16:40 compute-0 podman[268080]: 2025-12-05 10:16:40.074953412 +0000 UTC m=+1.028229725 container remove c2968c69e584a53fe5647ebda6920535272394cdc646fb16be3728777ec3ede5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_thompson, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 10:16:40 compute-0 systemd[1]: libpod-conmon-c2968c69e584a53fe5647ebda6920535272394cdc646fb16be3728777ec3ede5.scope: Deactivated successfully.
Dec 05 10:16:40 compute-0 sudo[267972]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:16:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:16:40 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:40 compute-0 sudo[268189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:16:40 compute-0 sudo[268189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:40 compute-0 sudo[268189]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:41.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:41 compute-0 ceph-mon[74418]: pgmap v825: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 05 10:16:41 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:41 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:16:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:16:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:41.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:16:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:16:42 compute-0 ceph-mon[74418]: pgmap v826: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:16:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:16:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:16:43 compute-0 podman[268217]: 2025-12-05 10:16:43.411727308 +0000 UTC m=+0.065733047 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 10:16:43 compute-0 podman[268218]: 2025-12-05 10:16:43.41767966 +0000 UTC m=+0.072554373 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:16:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:43.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:43.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:43.680Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:16:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:16:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:16:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:16:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:44 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:16:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:44 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:16:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:44 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:16:44 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:16:44.465 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:16:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:45.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:45 compute-0 ceph-mon[74418]: pgmap v827: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:16:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:45.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:16:45] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec 05 10:16:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:16:45] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec 05 10:16:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:16:46 compute-0 ceph-mon[74418]: pgmap v828: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:16:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:47.373Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:16:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:47.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:47 compute-0 podman[268260]: 2025-12-05 10:16:47.475565434 +0000 UTC m=+0.124691500 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 05 10:16:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:47.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:16:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:48 compute-0 ceph-mon[74418]: pgmap v829: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:16:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:16:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:49 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:16:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:49 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:16:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:49 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:16:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:49.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:49.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 100 op/s
Dec 05 10:16:51 compute-0 ceph-mon[74418]: pgmap v830: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 100 op/s
Dec 05 10:16:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:51.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:51.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.0 MiB/s wr, 26 op/s
Dec 05 10:16:52 compute-0 ceph-mon[74418]: pgmap v831: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.0 MiB/s wr, 26 op/s
Dec 05 10:16:52 compute-0 sudo[268295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:16:52 compute-0 sudo[268295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:16:52 compute-0 sudo[268295]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:53.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:53.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:53.682Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:16:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Dec 05 10:16:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:16:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:16:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:16:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:54 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:16:54 compute-0 ceph-mon[74418]: pgmap v832: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Dec 05 10:16:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:55.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:55.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:16:55] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec 05 10:16:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:16:55] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec 05 10:16:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:16:56 compute-0 ceph-mon[74418]: pgmap v833: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:16:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:16:57.383Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:16:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:57.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:57.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:16:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:16:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:16:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:16:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:16:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:16:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:16:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:16:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:16:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/742876943' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:16:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/742876943' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:16:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:58 compute-0 nova_compute[257087]: 2025-12-05 10:16:58.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:58 compute-0 ceph-mon[74418]: pgmap v834: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:16:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:16:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2625676847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:16:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:16:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:16:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:16:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:16:59 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:16:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:16:59.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:59 compute-0 nova_compute[257087]: 2025-12-05 10:16:59.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:59 compute-0 nova_compute[257087]: 2025-12-05 10:16:59.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:16:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:16:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:16:59.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:16:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:16:59 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3712043050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:00 compute-0 nova_compute[257087]: 2025-12-05 10:17:00.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:00 compute-0 ceph-mon[74418]: pgmap v835: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:17:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:01.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:01 compute-0 nova_compute[257087]: 2025-12-05 10:17:01.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:01 compute-0 nova_compute[257087]: 2025-12-05 10:17:01.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:17:01 compute-0 nova_compute[257087]: 2025-12-05 10:17:01.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:17:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:01.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:01 compute-0 nova_compute[257087]: 2025-12-05 10:17:01.659 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:17:01 compute-0 nova_compute[257087]: 2025-12-05 10:17:01.660 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:01 compute-0 nova_compute[257087]: 2025-12-05 10:17:01.660 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:01 compute-0 nova_compute[257087]: 2025-12-05 10:17:01.704 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:01 compute-0 nova_compute[257087]: 2025-12-05 10:17:01.705 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:01 compute-0 nova_compute[257087]: 2025-12-05 10:17:01.705 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:01 compute-0 nova_compute[257087]: 2025-12-05 10:17:01.706 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:17:01 compute-0 nova_compute[257087]: 2025-12-05 10:17:01.706 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 108 KiB/s wr, 39 op/s
Dec 05 10:17:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:17:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1744254551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:02 compute-0 nova_compute[257087]: 2025-12-05 10:17:02.170 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:02 compute-0 nova_compute[257087]: 2025-12-05 10:17:02.402 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:17:02 compute-0 nova_compute[257087]: 2025-12-05 10:17:02.403 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4905MB free_disk=59.942752838134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:17:02 compute-0 nova_compute[257087]: 2025-12-05 10:17:02.404 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:02 compute-0 nova_compute[257087]: 2025-12-05 10:17:02.404 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:02 compute-0 nova_compute[257087]: 2025-12-05 10:17:02.468 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:17:02 compute-0 nova_compute[257087]: 2025-12-05 10:17:02.469 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:17:02 compute-0 nova_compute[257087]: 2025-12-05 10:17:02.485 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:17:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864767544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:02 compute-0 nova_compute[257087]: 2025-12-05 10:17:02.945 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:02 compute-0 nova_compute[257087]: 2025-12-05 10:17:02.951 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:17:03 compute-0 nova_compute[257087]: 2025-12-05 10:17:03.054 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:17:03 compute-0 ceph-mon[74418]: pgmap v836: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 108 KiB/s wr, 39 op/s
Dec 05 10:17:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1744254551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2864767544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:03 compute-0 nova_compute[257087]: 2025-12-05 10:17:03.058 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:17:03 compute-0 nova_compute[257087]: 2025-12-05 10:17:03.059 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:03.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:17:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:17:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:03.684Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:17:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 108 KiB/s wr, 39 op/s
Dec 05 10:17:03 compute-0 nova_compute[257087]: 2025-12-05 10:17:03.928 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:03 compute-0 nova_compute[257087]: 2025-12-05 10:17:03.929 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:03 compute-0 nova_compute[257087]: 2025-12-05 10:17:03.929 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:17:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:17:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:17:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:17:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:04 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:17:04 compute-0 ceph-mon[74418]: pgmap v837: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 108 KiB/s wr, 39 op/s
Dec 05 10:17:05 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/899752764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:17:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:05.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:17:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:05.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:17:05] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec 05 10:17:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:17:05] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec 05 10:17:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 16 KiB/s wr, 19 op/s
Dec 05 10:17:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1575976537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:06 compute-0 ceph-mon[74418]: pgmap v838: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 16 KiB/s wr, 19 op/s
Dec 05 10:17:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:07.384Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:17:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:07.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:07.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 12 KiB/s wr, 1 op/s
Dec 05 10:17:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:08 compute-0 ceph-mon[74418]: pgmap v839: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 12 KiB/s wr, 1 op/s
Dec 05 10:17:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:17:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:17:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:17:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:09 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:17:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:09.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:09.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:09 compute-0 nova_compute[257087]: 2025-12-05 10:17:09.674 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Acquiring lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:09 compute-0 nova_compute[257087]: 2025-12-05 10:17:09.675 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:09 compute-0 nova_compute[257087]: 2025-12-05 10:17:09.704 257094 DEBUG nova.compute.manager [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 10:17:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 05 10:17:09 compute-0 nova_compute[257087]: 2025-12-05 10:17:09.835 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:09 compute-0 nova_compute[257087]: 2025-12-05 10:17:09.836 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:09 compute-0 nova_compute[257087]: 2025-12-05 10:17:09.844 257094 DEBUG nova.virt.hardware [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 10:17:09 compute-0 nova_compute[257087]: 2025-12-05 10:17:09.844 257094 INFO nova.compute.claims [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Claim successful on node compute-0.ctlplane.example.com
Dec 05 10:17:09 compute-0 nova_compute[257087]: 2025-12-05 10:17:09.971 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:17:10 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002921375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:10 compute-0 nova_compute[257087]: 2025-12-05 10:17:10.493 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:10 compute-0 nova_compute[257087]: 2025-12-05 10:17:10.502 257094 DEBUG nova.compute.provider_tree [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:17:10 compute-0 nova_compute[257087]: 2025-12-05 10:17:10.530 257094 DEBUG nova.scheduler.client.report [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:17:10 compute-0 nova_compute[257087]: 2025-12-05 10:17:10.558 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:10 compute-0 nova_compute[257087]: 2025-12-05 10:17:10.559 257094 DEBUG nova.compute.manager [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 10:17:10 compute-0 nova_compute[257087]: 2025-12-05 10:17:10.876 257094 DEBUG nova.compute.manager [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 10:17:10 compute-0 nova_compute[257087]: 2025-12-05 10:17:10.877 257094 DEBUG nova.network.neutron [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 10:17:10 compute-0 ceph-mon[74418]: pgmap v840: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 05 10:17:10 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2002921375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:10 compute-0 nova_compute[257087]: 2025-12-05 10:17:10.908 257094 INFO nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 10:17:10 compute-0 nova_compute[257087]: 2025-12-05 10:17:10.926 257094 DEBUG nova.compute.manager [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 10:17:11 compute-0 nova_compute[257087]: 2025-12-05 10:17:11.019 257094 DEBUG nova.compute.manager [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 10:17:11 compute-0 nova_compute[257087]: 2025-12-05 10:17:11.022 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 10:17:11 compute-0 nova_compute[257087]: 2025-12-05 10:17:11.023 257094 INFO nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Creating image(s)
Dec 05 10:17:11 compute-0 nova_compute[257087]: 2025-12-05 10:17:11.065 257094 DEBUG nova.storage.rbd_utils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] rbd image d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:11 compute-0 nova_compute[257087]: 2025-12-05 10:17:11.101 257094 DEBUG nova.storage.rbd_utils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] rbd image d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:11 compute-0 nova_compute[257087]: 2025-12-05 10:17:11.135 257094 DEBUG nova.storage.rbd_utils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] rbd image d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:11 compute-0 nova_compute[257087]: 2025-12-05 10:17:11.140 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Acquiring lock "c37c5df068678d7861d8fa0d8aed9df6a189daab" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:11 compute-0 nova_compute[257087]: 2025-12-05 10:17:11.141 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "c37c5df068678d7861d8fa0d8aed9df6a189daab" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:17:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:11.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:17:11 compute-0 nova_compute[257087]: 2025-12-05 10:17:11.517 257094 WARNING oslo_policy.policy [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 05 10:17:11 compute-0 nova_compute[257087]: 2025-12-05 10:17:11.518 257094 WARNING oslo_policy.policy [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 05 10:17:11 compute-0 nova_compute[257087]: 2025-12-05 10:17:11.524 257094 DEBUG nova.policy [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '769d2179358946d682e622908baeec49', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '838b1c7df82149408a85854af5a04909', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 05 10:17:11 compute-0 nova_compute[257087]: 2025-12-05 10:17:11.549 257094 DEBUG nova.virt.libvirt.imagebackend [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Image locations are: [{'url': 'rbd://3c63ce0f-5206-59ae-8381-b67d0b6424b5/images/4a6d0006-e2d8-47cd-a44b-309518215a42/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://3c63ce0f-5206-59ae-8381-b67d0b6424b5/images/4a6d0006-e2d8-47cd-a44b-309518215a42/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 05 10:17:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:11.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 3.3 KiB/s wr, 1 op/s
Dec 05 10:17:12 compute-0 ceph-mon[74418]: pgmap v841: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 3.3 KiB/s wr, 1 op/s
Dec 05 10:17:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:17:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:17:12 compute-0 sudo[268460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:17:12 compute-0 sudo[268460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:17:12 compute-0 sudo[268460]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:17:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:13.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:13 compute-0 nova_compute[257087]: 2025-12-05 10:17:13.572 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c37c5df068678d7861d8fa0d8aed9df6a189daab.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:13.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:13 compute-0 nova_compute[257087]: 2025-12-05 10:17:13.656 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c37c5df068678d7861d8fa0d8aed9df6a189daab.part --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:13 compute-0 nova_compute[257087]: 2025-12-05 10:17:13.658 257094 DEBUG nova.virt.images [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] 4a6d0006-e2d8-47cd-a44b-309518215a42 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 05 10:17:13 compute-0 nova_compute[257087]: 2025-12-05 10:17:13.659 257094 DEBUG nova.privsep.utils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 05 10:17:13 compute-0 nova_compute[257087]: 2025-12-05 10:17:13.659 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c37c5df068678d7861d8fa0d8aed9df6a189daab.part /var/lib/nova/instances/_base/c37c5df068678d7861d8fa0d8aed9df6a189daab.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:13.686Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:17:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.4 KiB/s wr, 7 op/s
Dec 05 10:17:13 compute-0 nova_compute[257087]: 2025-12-05 10:17:13.933 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c37c5df068678d7861d8fa0d8aed9df6a189daab.part /var/lib/nova/instances/_base/c37c5df068678d7861d8fa0d8aed9df6a189daab.converted" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:13 compute-0 nova_compute[257087]: 2025-12-05 10:17:13.940 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c37c5df068678d7861d8fa0d8aed9df6a189daab.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:13.999 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c37c5df068678d7861d8fa0d8aed9df6a189daab.converted --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:17:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:17:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:14.000 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "c37c5df068678d7861d8fa0d8aed9df6a189daab" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:14 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:14.031 257094 DEBUG nova.storage.rbd_utils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] rbd image d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:14.038 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c37c5df068678d7861d8fa0d8aed9df6a189daab d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:14.063 257094 DEBUG nova.network.neutron [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Successfully created port: aa273cb3-e801-441e-be4f-c5722f88c59c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:14.470 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c37c5df068678d7861d8fa0d8aed9df6a189daab d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:14 compute-0 podman[268536]: 2025-12-05 10:17:14.478517603 +0000 UTC m=+0.132934494 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 05 10:17:14 compute-0 podman[268537]: 2025-12-05 10:17:14.487441345 +0000 UTC m=+0.139900643 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 10:17:14 compute-0 ceph-mon[74418]: pgmap v842: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.4 KiB/s wr, 7 op/s
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:14.576 257094 DEBUG nova.storage.rbd_utils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] resizing rbd image d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:14.748 257094 DEBUG nova.objects.instance [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lazy-loading 'migration_context' on Instance uuid d5af7919-6b0b-4f37-9f5b-ed2b11e11a85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:14.984 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:14.985 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Ensure instance console log exists: /var/lib/nova/instances/d5af7919-6b0b-4f37-9f5b-ed2b11e11a85/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:14.986 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:14.987 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:14 compute-0 nova_compute[257087]: 2025-12-05 10:17:14.988 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:15.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:15.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:17:15] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec 05 10:17:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:17:15] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec 05 10:17:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 KiB/s wr, 7 op/s
Dec 05 10:17:16 compute-0 ceph-mon[74418]: pgmap v843: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 KiB/s wr, 7 op/s
Dec 05 10:17:17 compute-0 nova_compute[257087]: 2025-12-05 10:17:17.176 257094 DEBUG nova.network.neutron [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Successfully updated port: aa273cb3-e801-441e-be4f-c5722f88c59c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 10:17:17 compute-0 nova_compute[257087]: 2025-12-05 10:17:17.191 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Acquiring lock "refresh_cache-d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:17:17 compute-0 nova_compute[257087]: 2025-12-05 10:17:17.191 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Acquired lock "refresh_cache-d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:17:17 compute-0 nova_compute[257087]: 2025-12-05 10:17:17.192 257094 DEBUG nova.network.neutron [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 10:17:17 compute-0 nova_compute[257087]: 2025-12-05 10:17:17.297 257094 DEBUG nova.compute.manager [req-30452dd6-45d5-4b85-bdc0-573b6fc624d7 req-2464f491-204a-4230-972c-5b4b081f746b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Received event network-changed-aa273cb3-e801-441e-be4f-c5722f88c59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:17:17 compute-0 nova_compute[257087]: 2025-12-05 10:17:17.298 257094 DEBUG nova.compute.manager [req-30452dd6-45d5-4b85-bdc0-573b6fc624d7 req-2464f491-204a-4230-972c-5b4b081f746b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Refreshing instance network info cache due to event network-changed-aa273cb3-e801-441e-be4f-c5722f88c59c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 10:17:17 compute-0 nova_compute[257087]: 2025-12-05 10:17:17.298 257094 DEBUG oslo_concurrency.lockutils [req-30452dd6-45d5-4b85-bdc0-573b6fc624d7 req-2464f491-204a-4230-972c-5b4b081f746b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Acquiring lock "refresh_cache-d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:17:17 compute-0 nova_compute[257087]: 2025-12-05 10:17:17.378 257094 DEBUG nova.network.neutron [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 10:17:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:17.386Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:17:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:17.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:17.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 KiB/s wr, 7 op/s
Dec 05 10:17:18 compute-0 podman[268652]: 2025-12-05 10:17:18.434775005 +0000 UTC m=+0.101849589 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:17:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.837 257094 DEBUG nova.network.neutron [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Updating instance_info_cache with network_info: [{"id": "aa273cb3-e801-441e-be4f-c5722f88c59c", "address": "fa:16:3e:e5:31:3f", "network": {"id": "c4d0bdd2-23f5-4062-a9f2-c5c372333fcf", "bridge": "br-int", "label": "tempest-network-smoke--459715102", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "838b1c7df82149408a85854af5a04909", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa273cb3-e8", "ovs_interfaceid": "aa273cb3-e801-441e-be4f-c5722f88c59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.867 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Releasing lock "refresh_cache-d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.868 257094 DEBUG nova.compute.manager [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Instance network_info: |[{"id": "aa273cb3-e801-441e-be4f-c5722f88c59c", "address": "fa:16:3e:e5:31:3f", "network": {"id": "c4d0bdd2-23f5-4062-a9f2-c5c372333fcf", "bridge": "br-int", "label": "tempest-network-smoke--459715102", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "838b1c7df82149408a85854af5a04909", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa273cb3-e8", "ovs_interfaceid": "aa273cb3-e801-441e-be4f-c5722f88c59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.868 257094 DEBUG oslo_concurrency.lockutils [req-30452dd6-45d5-4b85-bdc0-573b6fc624d7 req-2464f491-204a-4230-972c-5b4b081f746b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Acquired lock "refresh_cache-d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.869 257094 DEBUG nova.network.neutron [req-30452dd6-45d5-4b85-bdc0-573b6fc624d7 req-2464f491-204a-4230-972c-5b4b081f746b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Refreshing network info cache for port aa273cb3-e801-441e-be4f-c5722f88c59c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.873 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Start _get_guest_xml network_info=[{"id": "aa273cb3-e801-441e-be4f-c5722f88c59c", "address": "fa:16:3e:e5:31:3f", "network": {"id": "c4d0bdd2-23f5-4062-a9f2-c5c372333fcf", "bridge": "br-int", "label": "tempest-network-smoke--459715102", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "838b1c7df82149408a85854af5a04909", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa273cb3-e8", "ovs_interfaceid": "aa273cb3-e801-441e-be4f-c5722f88c59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T10:12:19Z,direct_url=<?>,disk_format='qcow2',id=4a6d0006-e2d8-47cd-a44b-309518215a42,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='096a8b53d5eb4713bd6967b82ab963be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T10:12:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': '4a6d0006-e2d8-47cd-a44b-309518215a42'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.882 257094 WARNING nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.894 257094 DEBUG nova.virt.libvirt.host [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.895 257094 DEBUG nova.virt.libvirt.host [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 10:17:18 compute-0 ceph-mon[74418]: pgmap v844: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 KiB/s wr, 7 op/s
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.902 257094 DEBUG nova.virt.libvirt.host [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.903 257094 DEBUG nova.virt.libvirt.host [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.904 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.904 257094 DEBUG nova.virt.hardware [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T10:12:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ee406824-2b20-4139-86d1-eac63254f83a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T10:12:19Z,direct_url=<?>,disk_format='qcow2',id=4a6d0006-e2d8-47cd-a44b-309518215a42,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='096a8b53d5eb4713bd6967b82ab963be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T10:12:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.905 257094 DEBUG nova.virt.hardware [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.905 257094 DEBUG nova.virt.hardware [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.905 257094 DEBUG nova.virt.hardware [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.906 257094 DEBUG nova.virt.hardware [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.906 257094 DEBUG nova.virt.hardware [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.906 257094 DEBUG nova.virt.hardware [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.906 257094 DEBUG nova.virt.hardware [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.906 257094 DEBUG nova.virt.hardware [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.906 257094 DEBUG nova.virt.hardware [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.907 257094 DEBUG nova.virt.hardware [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.912 257094 DEBUG nova.privsep.utils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 05 10:17:18 compute-0 nova_compute[257087]: 2025-12-05 10:17:18.913 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:19 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:17:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:19 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:17:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:19 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:17:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:19 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:17:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:17:19 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/961918556' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:17:19 compute-0 nova_compute[257087]: 2025-12-05 10:17:19.456 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:19.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:19 compute-0 nova_compute[257087]: 2025-12-05 10:17:19.493 257094 DEBUG nova.storage.rbd_utils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] rbd image d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:19 compute-0 nova_compute[257087]: 2025-12-05 10:17:19.499 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:17:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:19.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:17:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Dec 05 10:17:19 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/961918556' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:17:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:17:19 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583325864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.004 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.007 257094 DEBUG nova.virt.libvirt.vif [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T10:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2015202476',display_name='tempest-TestNetworkBasicOps-server-2015202476',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2015202476',id=5,image_ref='4a6d0006-e2d8-47cd-a44b-309518215a42',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBERgToEyV2W8KKM4rKMwDAoJ1Hw78zzJ0gcjKRolKcqupsdl1NMGZNiHIOWKfB7s8QyL+/5bbhT6Fx7YkgeXNC08RMQY+TxJd2lkJgkLysEUh0JEcMaGRFjc7I4wY0ZcSA==',key_name='tempest-TestNetworkBasicOps-1454695835',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='838b1c7df82149408a85854af5a04909',ramdisk_id='',reservation_id='r-cz2ocla0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4a6d0006-e2d8-47cd-a44b-309518215a42',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-410983719',owner_user_name='tempest-TestNetworkBasicOps-410983719-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T10:17:10Z,user_data=None,user_id='769d2179358946d682e622908baeec49',uuid=d5af7919-6b0b-4f37-9f5b-ed2b11e11a85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa273cb3-e801-441e-be4f-c5722f88c59c", "address": "fa:16:3e:e5:31:3f", "network": {"id": "c4d0bdd2-23f5-4062-a9f2-c5c372333fcf", "bridge": "br-int", "label": "tempest-network-smoke--459715102", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "838b1c7df82149408a85854af5a04909", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa273cb3-e8", "ovs_interfaceid": "aa273cb3-e801-441e-be4f-c5722f88c59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.007 257094 DEBUG nova.network.os_vif_util [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Converting VIF {"id": "aa273cb3-e801-441e-be4f-c5722f88c59c", "address": "fa:16:3e:e5:31:3f", "network": {"id": "c4d0bdd2-23f5-4062-a9f2-c5c372333fcf", "bridge": "br-int", "label": "tempest-network-smoke--459715102", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "838b1c7df82149408a85854af5a04909", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa273cb3-e8", "ovs_interfaceid": "aa273cb3-e801-441e-be4f-c5722f88c59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.009 257094 DEBUG nova.network.os_vif_util [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa273cb3-e801-441e-be4f-c5722f88c59c,network=Network(c4d0bdd2-23f5-4062-a9f2-c5c372333fcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa273cb3-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.012 257094 DEBUG nova.objects.instance [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lazy-loading 'pci_devices' on Instance uuid d5af7919-6b0b-4f37-9f5b-ed2b11e11a85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.031 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] End _get_guest_xml xml=<domain type="kvm">
Dec 05 10:17:20 compute-0 nova_compute[257087]:   <uuid>d5af7919-6b0b-4f37-9f5b-ed2b11e11a85</uuid>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   <name>instance-00000005</name>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   <memory>131072</memory>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   <vcpu>1</vcpu>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   <metadata>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <nova:name>tempest-TestNetworkBasicOps-server-2015202476</nova:name>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <nova:creationTime>2025-12-05 10:17:18</nova:creationTime>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <nova:flavor name="m1.nano">
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <nova:memory>128</nova:memory>
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <nova:disk>1</nova:disk>
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <nova:swap>0</nova:swap>
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <nova:vcpus>1</nova:vcpus>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       </nova:flavor>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <nova:owner>
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <nova:user uuid="769d2179358946d682e622908baeec49">tempest-TestNetworkBasicOps-410983719-project-member</nova:user>
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <nova:project uuid="838b1c7df82149408a85854af5a04909">tempest-TestNetworkBasicOps-410983719</nova:project>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       </nova:owner>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <nova:root type="image" uuid="4a6d0006-e2d8-47cd-a44b-309518215a42"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <nova:ports>
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <nova:port uuid="aa273cb3-e801-441e-be4f-c5722f88c59c">
Dec 05 10:17:20 compute-0 nova_compute[257087]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:         </nova:port>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       </nova:ports>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     </nova:instance>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   </metadata>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   <sysinfo type="smbios">
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <system>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <entry name="manufacturer">RDO</entry>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <entry name="product">OpenStack Compute</entry>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <entry name="serial">d5af7919-6b0b-4f37-9f5b-ed2b11e11a85</entry>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <entry name="uuid">d5af7919-6b0b-4f37-9f5b-ed2b11e11a85</entry>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <entry name="family">Virtual Machine</entry>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     </system>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   </sysinfo>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   <os>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <boot dev="hd"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <smbios mode="sysinfo"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   </os>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   <features>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <acpi/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <apic/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <vmcoreinfo/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   </features>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   <clock offset="utc">
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <timer name="hpet" present="no"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   </clock>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   <cpu mode="host-model" match="exact">
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   </cpu>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   <devices>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <disk type="network" device="disk">
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <driver type="raw" cache="none"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <source protocol="rbd" name="vms/d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk">
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <host name="192.168.122.100" port="6789"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <host name="192.168.122.102" port="6789"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <host name="192.168.122.101" port="6789"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       </source>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <auth username="openstack">
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <secret type="ceph" uuid="3c63ce0f-5206-59ae-8381-b67d0b6424b5"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       </auth>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <target dev="vda" bus="virtio"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     </disk>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <disk type="network" device="cdrom">
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <driver type="raw" cache="none"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <source protocol="rbd" name="vms/d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk.config">
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <host name="192.168.122.100" port="6789"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <host name="192.168.122.102" port="6789"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <host name="192.168.122.101" port="6789"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       </source>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <auth username="openstack">
Dec 05 10:17:20 compute-0 nova_compute[257087]:         <secret type="ceph" uuid="3c63ce0f-5206-59ae-8381-b67d0b6424b5"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       </auth>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <target dev="sda" bus="sata"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     </disk>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <interface type="ethernet">
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <mac address="fa:16:3e:e5:31:3f"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <model type="virtio"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <mtu size="1442"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <target dev="tapaa273cb3-e8"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     </interface>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <serial type="pty">
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <log file="/var/lib/nova/instances/d5af7919-6b0b-4f37-9f5b-ed2b11e11a85/console.log" append="off"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     </serial>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <video>
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <model type="virtio"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     </video>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <input type="tablet" bus="usb"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <rng model="virtio">
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <backend model="random">/dev/urandom</backend>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     </rng>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <controller type="usb" index="0"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     <memballoon model="virtio">
Dec 05 10:17:20 compute-0 nova_compute[257087]:       <stats period="10"/>
Dec 05 10:17:20 compute-0 nova_compute[257087]:     </memballoon>
Dec 05 10:17:20 compute-0 nova_compute[257087]:   </devices>
Dec 05 10:17:20 compute-0 nova_compute[257087]: </domain>
Dec 05 10:17:20 compute-0 nova_compute[257087]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.033 257094 DEBUG nova.compute.manager [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Preparing to wait for external event network-vif-plugged-aa273cb3-e801-441e-be4f-c5722f88c59c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.033 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Acquiring lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.033 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.034 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.034 257094 DEBUG nova.virt.libvirt.vif [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T10:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2015202476',display_name='tempest-TestNetworkBasicOps-server-2015202476',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2015202476',id=5,image_ref='4a6d0006-e2d8-47cd-a44b-309518215a42',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBERgToEyV2W8KKM4rKMwDAoJ1Hw78zzJ0gcjKRolKcqupsdl1NMGZNiHIOWKfB7s8QyL+/5bbhT6Fx7YkgeXNC08RMQY+TxJd2lkJgkLysEUh0JEcMaGRFjc7I4wY0ZcSA==',key_name='tempest-TestNetworkBasicOps-1454695835',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='838b1c7df82149408a85854af5a04909',ramdisk_id='',reservation_id='r-cz2ocla0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4a6d0006-e2d8-47cd-a44b-309518215a42',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-410983719',owner_user_name='tempest-TestNetworkBasicOps-410983719-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T10:17:10Z,user_data=None,user_id='769d2179358946d682e622908baeec49',uuid=d5af7919-6b0b-4f37-9f5b-ed2b11e11a85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa273cb3-e801-441e-be4f-c5722f88c59c", "address": "fa:16:3e:e5:31:3f", "network": {"id": "c4d0bdd2-23f5-4062-a9f2-c5c372333fcf", "bridge": "br-int", "label": "tempest-network-smoke--459715102", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "838b1c7df82149408a85854af5a04909", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa273cb3-e8", "ovs_interfaceid": "aa273cb3-e801-441e-be4f-c5722f88c59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.035 257094 DEBUG nova.network.os_vif_util [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Converting VIF {"id": "aa273cb3-e801-441e-be4f-c5722f88c59c", "address": "fa:16:3e:e5:31:3f", "network": {"id": "c4d0bdd2-23f5-4062-a9f2-c5c372333fcf", "bridge": "br-int", "label": "tempest-network-smoke--459715102", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "838b1c7df82149408a85854af5a04909", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa273cb3-e8", "ovs_interfaceid": "aa273cb3-e801-441e-be4f-c5722f88c59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.035 257094 DEBUG nova.network.os_vif_util [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa273cb3-e801-441e-be4f-c5722f88c59c,network=Network(c4d0bdd2-23f5-4062-a9f2-c5c372333fcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa273cb3-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.036 257094 DEBUG os_vif [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa273cb3-e801-441e-be4f-c5722f88c59c,network=Network(c4d0bdd2-23f5-4062-a9f2-c5c372333fcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa273cb3-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.090 257094 DEBUG ovsdbapp.backend.ovs_idl [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.091 257094 DEBUG ovsdbapp.backend.ovs_idl [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.091 257094 DEBUG ovsdbapp.backend.ovs_idl [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.091 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.092 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.092 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.093 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.095 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.097 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.107 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.108 257094 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.108 257094 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.110 257094 INFO oslo.privsep.daemon [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpubu06at8/privsep.sock']
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.338 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.344 257094 DEBUG nova.network.neutron [req-30452dd6-45d5-4b85-bdc0-573b6fc624d7 req-2464f491-204a-4230-972c-5b4b081f746b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Updated VIF entry in instance network info cache for port aa273cb3-e801-441e-be4f-c5722f88c59c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.345 257094 DEBUG nova.network.neutron [req-30452dd6-45d5-4b85-bdc0-573b6fc624d7 req-2464f491-204a-4230-972c-5b4b081f746b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Updating instance_info_cache with network_info: [{"id": "aa273cb3-e801-441e-be4f-c5722f88c59c", "address": "fa:16:3e:e5:31:3f", "network": {"id": "c4d0bdd2-23f5-4062-a9f2-c5c372333fcf", "bridge": "br-int", "label": "tempest-network-smoke--459715102", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "838b1c7df82149408a85854af5a04909", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa273cb3-e8", "ovs_interfaceid": "aa273cb3-e801-441e-be4f-c5722f88c59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 10:17:20 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.363 257094 DEBUG oslo_concurrency.lockutils [req-30452dd6-45d5-4b85-bdc0-573b6fc624d7 req-2464f491-204a-4230-972c-5b4b081f746b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Releasing lock "refresh_cache-d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:17:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:20.575 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:20.575 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:20.575 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.056 257094 INFO oslo.privsep.daemon [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Spawned new privsep daemon via rootwrap
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:20.863 268749 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.136 268749 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.138 268749 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.139 268749 INFO oslo.privsep.daemon [-] privsep daemon running as pid 268749
Dec 05 10:17:21 compute-0 ceph-mon[74418]: pgmap v845: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Dec 05 10:17:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3583325864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:17:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:21.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.515 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.516 257094 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa273cb3-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.517 257094 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa273cb3-e8, col_values=(('external_ids', {'iface-id': 'aa273cb3-e801-441e-be4f-c5722f88c59c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:31:3f', 'vm-uuid': 'd5af7919-6b0b-4f37-9f5b-ed2b11e11a85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.521 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:17:21 compute-0 NetworkManager[48957]: <info>  [1764929841.5241] manager: (tapaa273cb3-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.529 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.530 257094 INFO os_vif [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa273cb3-e801-441e-be4f-c5722f88c59c,network=Network(c4d0bdd2-23f5-4062-a9f2-c5c372333fcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa273cb3-e8')
Dec 05 10:17:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:21.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.624 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.625 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.625 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] No VIF found with MAC fa:16:3e:e5:31:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.626 257094 INFO nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Using config drive
Dec 05 10:17:21 compute-0 nova_compute[257087]: 2025-12-05 10:17:21.658 257094 DEBUG nova.storage.rbd_utils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] rbd image d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 05 10:17:22 compute-0 ceph-mon[74418]: pgmap v846: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 05 10:17:22 compute-0 nova_compute[257087]: 2025-12-05 10:17:22.536 257094 INFO nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Creating config drive at /var/lib/nova/instances/d5af7919-6b0b-4f37-9f5b-ed2b11e11a85/disk.config
Dec 05 10:17:22 compute-0 nova_compute[257087]: 2025-12-05 10:17:22.542 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d5af7919-6b0b-4f37-9f5b-ed2b11e11a85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpikzdo499 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:22 compute-0 nova_compute[257087]: 2025-12-05 10:17:22.676 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d5af7919-6b0b-4f37-9f5b-ed2b11e11a85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpikzdo499" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:22 compute-0 nova_compute[257087]: 2025-12-05 10:17:22.713 257094 DEBUG nova.storage.rbd_utils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] rbd image d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:22 compute-0 nova_compute[257087]: 2025-12-05 10:17:22.718 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d5af7919-6b0b-4f37-9f5b-ed2b11e11a85/disk.config d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:22 compute-0 nova_compute[257087]: 2025-12-05 10:17:22.898 257094 DEBUG oslo_concurrency.processutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d5af7919-6b0b-4f37-9f5b-ed2b11e11a85/disk.config d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:22 compute-0 nova_compute[257087]: 2025-12-05 10:17:22.900 257094 INFO nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Deleting local config drive /var/lib/nova/instances/d5af7919-6b0b-4f37-9f5b-ed2b11e11a85/disk.config because it was imported into RBD.
Dec 05 10:17:22 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 05 10:17:22 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 05 10:17:23 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 05 10:17:23 compute-0 NetworkManager[48957]: <info>  [1764929843.0609] manager: (tapaa273cb3-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Dec 05 10:17:23 compute-0 kernel: tapaa273cb3-e8: entered promiscuous mode
Dec 05 10:17:23 compute-0 ovn_controller[154822]: 2025-12-05T10:17:23Z|00027|binding|INFO|Claiming lport aa273cb3-e801-441e-be4f-c5722f88c59c for this chassis.
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.065 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:23 compute-0 ovn_controller[154822]: 2025-12-05T10:17:23Z|00028|binding|INFO|aa273cb3-e801-441e-be4f-c5722f88c59c: Claiming fa:16:3e:e5:31:3f 10.100.0.4
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.071 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:23 compute-0 systemd-udevd[268848]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:17:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:23.087 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:31:3f 10.100.0.4'], port_security=['fa:16:3e:e5:31:3f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd5af7919-6b0b-4f37-9f5b-ed2b11e11a85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '838b1c7df82149408a85854af5a04909', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7dc2076d-6a4b-4522-8174-d85e29ec45d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ea83c94-ac7d-40f6-95c6-8524308b417c, chassis=[<ovs.db.idl.Row object at 0x7ffbdf76c910>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ffbdf76c910>], logical_port=aa273cb3-e801-441e-be4f-c5722f88c59c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:17:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:23.088 165250 INFO neutron.agent.ovn.metadata.agent [-] Port aa273cb3-e801-441e-be4f-c5722f88c59c in datapath c4d0bdd2-23f5-4062-a9f2-c5c372333fcf bound to our chassis
Dec 05 10:17:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:23.090 165250 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d0bdd2-23f5-4062-a9f2-c5c372333fcf
Dec 05 10:17:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:23.092 165250 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpggxucx3t/privsep.sock']
Dec 05 10:17:23 compute-0 NetworkManager[48957]: <info>  [1764929843.1234] device (tapaa273cb3-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 10:17:23 compute-0 NetworkManager[48957]: <info>  [1764929843.1255] device (tapaa273cb3-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 10:17:23 compute-0 systemd-machined[217607]: New machine qemu-1-instance-00000005.
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.183 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:23 compute-0 ovn_controller[154822]: 2025-12-05T10:17:23Z|00029|binding|INFO|Setting lport aa273cb3-e801-441e-be4f-c5722f88c59c ovn-installed in OVS
Dec 05 10:17:23 compute-0 ovn_controller[154822]: 2025-12-05T10:17:23Z|00030|binding|INFO|Setting lport aa273cb3-e801-441e-be4f-c5722f88c59c up in Southbound
Dec 05 10:17:23 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000005.
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.191 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:23.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.542 257094 DEBUG nova.compute.manager [req-af5ef3ba-3aa8-43af-b4e7-bff903f9d89f req-76ba54ae-4480-4875-9c90-54135de5c938 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Received event network-vif-plugged-aa273cb3-e801-441e-be4f-c5722f88c59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.542 257094 DEBUG oslo_concurrency.lockutils [req-af5ef3ba-3aa8-43af-b4e7-bff903f9d89f req-76ba54ae-4480-4875-9c90-54135de5c938 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Acquiring lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.543 257094 DEBUG oslo_concurrency.lockutils [req-af5ef3ba-3aa8-43af-b4e7-bff903f9d89f req-76ba54ae-4480-4875-9c90-54135de5c938 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.543 257094 DEBUG oslo_concurrency.lockutils [req-af5ef3ba-3aa8-43af-b4e7-bff903f9d89f req-76ba54ae-4480-4875-9c90-54135de5c938 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.543 257094 DEBUG nova.compute.manager [req-af5ef3ba-3aa8-43af-b4e7-bff903f9d89f req-76ba54ae-4480-4875-9c90-54135de5c938 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Processing event network-vif-plugged-aa273cb3-e801-441e-be4f-c5722f88c59c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 10:17:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:23.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:23.687Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:17:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:23.687Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:17:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:23.688Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:17:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.832 257094 DEBUG nova.virt.driver [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Emitting event <LifecycleEvent: 1764929843.831782, d5af7919-6b0b-4f37-9f5b-ed2b11e11a85 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.833 257094 INFO nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] VM Started (Lifecycle Event)
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.836 257094 DEBUG nova.compute.manager [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.841 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.846 257094 INFO nova.virt.libvirt.driver [-] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Instance spawned successfully.
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.846 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 10:17:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:23.948 165250 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.950 257094 DEBUG nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:17:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:23.950 165250 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpggxucx3t/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 05 10:17:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:23.802 268908 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 10:17:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:23.813 268908 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 10:17:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:23.816 268908 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 05 10:17:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:23.816 268908 INFO oslo.privsep.daemon [-] privsep daemon running as pid 268908
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.955 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.955 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:17:23 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:23.955 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbee022-8239-401e-8af5-bf40e2b1012b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.956 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.956 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.957 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.957 257094 DEBUG nova.virt.libvirt.driver [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:17:23 compute-0 nova_compute[257087]: 2025-12-05 10:17:23.962 257094 DEBUG nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 10:17:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:17:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:24 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:17:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:24 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.000 257094 INFO nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.001 257094 DEBUG nova.virt.driver [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Emitting event <LifecycleEvent: 1764929843.8320649, d5af7919-6b0b-4f37-9f5b-ed2b11e11a85 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.001 257094 INFO nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] VM Paused (Lifecycle Event)
Dec 05 10:17:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:24 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.056 257094 DEBUG nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.062 257094 DEBUG nova.virt.driver [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] Emitting event <LifecycleEvent: 1764929843.8414514, d5af7919-6b0b-4f37-9f5b-ed2b11e11a85 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.063 257094 INFO nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] VM Resumed (Lifecycle Event)
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.092 257094 INFO nova.compute.manager [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Took 13.07 seconds to spawn the instance on the hypervisor.
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.093 257094 DEBUG nova.compute.manager [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.094 257094 DEBUG nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.103 257094 DEBUG nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.144 257094 INFO nova.compute.manager [None req-4c2ca4f9-d9ef-43a4-9d23-d40db7e6fd0c - - - - - -] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.175 257094 INFO nova.compute.manager [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Took 14.37 seconds to build instance.
Dec 05 10:17:24 compute-0 nova_compute[257087]: 2025-12-05 10:17:24.260 257094 DEBUG oslo_concurrency.lockutils [None req-ab21c482-ba79-495b-93fc-c442f4d6d2c6 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:24 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:24.655 268908 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:24 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:24.655 268908 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:24 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:24.655 268908 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:24 compute-0 ceph-mon[74418]: pgmap v847: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Dec 05 10:17:25 compute-0 nova_compute[257087]: 2025-12-05 10:17:25.367 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:25.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:25 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:25.580 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[2a488b63-2a10-4d12-9e86-9a618c33b22c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:25 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:25.582 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc4d0bdd2-21 in ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 10:17:25 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:25.585 268908 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc4d0bdd2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 10:17:25 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:25.585 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[43c41c82-43ca-463a-afd3-8c5581c54acc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:25 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:25.588 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[a93c4945-a365-4df0-a760-024b52e8af8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:25.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:25 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:25.619 165514 DEBUG oslo.privsep.daemon [-] privsep: reply[4642b2ae-b50c-4843-8664-f3c6ce642496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:25 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:25.637 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[87829ec7-c0d1-4dce-8028-9b9f3cb0fef7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:25 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:25.639 165250 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp8uebcisv/privsep.sock']
Dec 05 10:17:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:17:25] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec 05 10:17:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:17:25] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec 05 10:17:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 708 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 05 10:17:26 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:26.435 165250 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 10:17:26 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:26.436 165250 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp8uebcisv/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 05 10:17:26 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:26.277 268925 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 10:17:26 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:26.283 268925 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 10:17:26 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:26.286 268925 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 05 10:17:26 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:26.286 268925 INFO oslo.privsep.daemon [-] privsep daemon running as pid 268925
Dec 05 10:17:26 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:26.438 268925 DEBUG oslo.privsep.daemon [-] privsep: reply[9099076c-cb18-4210-a7a2-dd419d845b6f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 compute-0 nova_compute[257087]: 2025-12-05 10:17:26.520 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:26 compute-0 nova_compute[257087]: 2025-12-05 10:17:26.560 257094 DEBUG nova.compute.manager [req-60edce0a-4fe7-47c5-8987-5357ab99272d req-eeacfc3f-e19c-49be-94dc-62eec2bc192b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Received event network-vif-plugged-aa273cb3-e801-441e-be4f-c5722f88c59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:17:26 compute-0 nova_compute[257087]: 2025-12-05 10:17:26.560 257094 DEBUG oslo_concurrency.lockutils [req-60edce0a-4fe7-47c5-8987-5357ab99272d req-eeacfc3f-e19c-49be-94dc-62eec2bc192b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Acquiring lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:26 compute-0 nova_compute[257087]: 2025-12-05 10:17:26.560 257094 DEBUG oslo_concurrency.lockutils [req-60edce0a-4fe7-47c5-8987-5357ab99272d req-eeacfc3f-e19c-49be-94dc-62eec2bc192b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:26 compute-0 nova_compute[257087]: 2025-12-05 10:17:26.560 257094 DEBUG oslo_concurrency.lockutils [req-60edce0a-4fe7-47c5-8987-5357ab99272d req-eeacfc3f-e19c-49be-94dc-62eec2bc192b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:26 compute-0 nova_compute[257087]: 2025-12-05 10:17:26.561 257094 DEBUG nova.compute.manager [req-60edce0a-4fe7-47c5-8987-5357ab99272d req-eeacfc3f-e19c-49be-94dc-62eec2bc192b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] No waiting events found dispatching network-vif-plugged-aa273cb3-e801-441e-be4f-c5722f88c59c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:17:26 compute-0 nova_compute[257087]: 2025-12-05 10:17:26.561 257094 WARNING nova.compute.manager [req-60edce0a-4fe7-47c5-8987-5357ab99272d req-eeacfc3f-e19c-49be-94dc-62eec2bc192b c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Received unexpected event network-vif-plugged-aa273cb3-e801-441e-be4f-c5722f88c59c for instance with vm_state active and task_state None.
Dec 05 10:17:26 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:26.939 268925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:26 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:26.939 268925 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:26 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:26.939 268925 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:27 compute-0 ceph-mon[74418]: pgmap v848: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 708 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 05 10:17:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:27.387Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:17:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:27.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.576 268925 DEBUG oslo.privsep.daemon [-] privsep: reply[361a93be-c9c4-414a-893f-daad52aed3ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:17:27
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.meta', '.nfs', 'backups', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec 05 10:17:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:17:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.604 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[7aac6596-beef-4763-abaa-54489cc3fb0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:27 compute-0 NetworkManager[48957]: <info>  [1764929847.6068] manager: (tapc4d0bdd2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Dec 05 10:17:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:27.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:27 compute-0 systemd-udevd[268938]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.639 268925 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3cb6b6-bb7a-4ba6-b2cc-6ec6df20d90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.643 268925 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6bd54e-c191-4bea-98b1-de39f07d7117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:17:27 compute-0 NetworkManager[48957]: <info>  [1764929847.6728] device (tapc4d0bdd2-20): carrier: link connected
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.678 268925 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6750d4-3ed3-490c-9b06-f0cc3c01f5f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.703 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[628e062d-f87b-4fd9-bcbf-de52c75899ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d0bdd2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:df:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461769, 'reachable_time': 32878, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268957, 'error': None, 'target': 'ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.722 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[39706dec-03d5-488c-a6c2-1024c5ec7b3d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:dfc0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461769, 'tstamp': 461769}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268958, 'error': None, 'target': 'ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.739 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[5182b496-2c93-4f22-8b65-596c20d8a980]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d0bdd2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:df:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461769, 'reachable_time': 32878, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268959, 'error': None, 'target': 'ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.779 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b7a5de-cac6-46b8-a8d4-f87a4dd1f62f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.842 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[08781e26-1394-4693-bb55-096d895a6e80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.844 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d0bdd2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.845 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 10:17:27 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:27.845 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d0bdd2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011080044930650518 of space, bias 1.0, pg target 0.33240134791951553 quantized to 32 (current 32)
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:17:27 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:17:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:17:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:17:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:17:28 compute-0 nova_compute[257087]: 2025-12-05 10:17:28.005 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:28 compute-0 kernel: tapc4d0bdd2-20: entered promiscuous mode
Dec 05 10:17:28 compute-0 NetworkManager[48957]: <info>  [1764929848.0096] manager: (tapc4d0bdd2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:28.010 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d0bdd2-20, col_values=(('external_ids', {'iface-id': '440d4620-f140-4cd0-9a68-d262802b1e26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:28 compute-0 nova_compute[257087]: 2025-12-05 10:17:28.011 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:28 compute-0 ovn_controller[154822]: 2025-12-05T10:17:28Z|00031|binding|INFO|Releasing lport 440d4620-f140-4cd0-9a68-d262802b1e26 from this chassis (sb_readonly=0)
Dec 05 10:17:28 compute-0 nova_compute[257087]: 2025-12-05 10:17:28.013 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:28.014 165250 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c4d0bdd2-23f5-4062-a9f2-c5c372333fcf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c4d0bdd2-23f5-4062-a9f2-c5c372333fcf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:28.015 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd312b6-f9ac-497e-8918-fbf404f7790a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:28.017 165250 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]: global
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     log         /dev/log local0 debug
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     log-tag     haproxy-metadata-proxy-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     user        root
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     group       root
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     maxconn     1024
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     pidfile     /var/lib/neutron/external/pids/c4d0bdd2-23f5-4062-a9f2-c5c372333fcf.pid.haproxy
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     daemon
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]: 
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]: defaults
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     log global
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     mode http
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     option httplog
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     option dontlognull
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     option http-server-close
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     option forwardfor
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     retries                 3
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     timeout http-request    30s
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     timeout connect         30s
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     timeout client          32s
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     timeout server          32s
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     timeout http-keep-alive 30s
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]: 
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]: 
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]: listen listener
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     bind 169.254.169.254:80
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:     http-request add-header X-OVN-Network-ID c4d0bdd2-23f5-4062-a9f2-c5c372333fcf
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 10:17:28 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:28.018 165250 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf', 'env', 'PROCESS_TAG=haproxy-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c4d0bdd2-23f5-4062-a9f2-c5c372333fcf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 05 10:17:28 compute-0 nova_compute[257087]: 2025-12-05 10:17:28.058 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:17:28 compute-0 ceph-mon[74418]: pgmap v849: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 05 10:17:28 compute-0 podman[268995]: 2025-12-05 10:17:28.492807189 +0000 UTC m=+0.049365572 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 05 10:17:28 compute-0 podman[268995]: 2025-12-05 10:17:28.923592417 +0000 UTC m=+0.480150730 container create 8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:17:28 compute-0 systemd[1]: Started libpod-conmon-8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a.scope.
Dec 05 10:17:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:17:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:29 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:17:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:29 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:17:29 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:29 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:17:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60a11c058f5f2e8dd0f52ac4197caacd6cc32cf3fa6aed40fa2b7e262d0c083a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:29 compute-0 podman[268995]: 2025-12-05 10:17:29.176783049 +0000 UTC m=+0.733341382 container init 8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:17:29 compute-0 podman[268995]: 2025-12-05 10:17:29.188500427 +0000 UTC m=+0.745058720 container start 8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:17:29 compute-0 neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf[269011]: [NOTICE]   (269015) : New worker (269017) forked
Dec 05 10:17:29 compute-0 neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf[269011]: [NOTICE]   (269015) : Loading success.
Dec 05 10:17:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:29.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:29 compute-0 ovn_controller[154822]: 2025-12-05T10:17:29Z|00032|binding|INFO|Releasing lport 440d4620-f140-4cd0-9a68-d262802b1e26 from this chassis (sb_readonly=0)
Dec 05 10:17:29 compute-0 nova_compute[257087]: 2025-12-05 10:17:29.581 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:29 compute-0 NetworkManager[48957]: <info>  [1764929849.5871] manager: (patch-provnet-4733f83c-d091-4ff4-b60c-6ae6c11d8975-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Dec 05 10:17:29 compute-0 NetworkManager[48957]: <info>  [1764929849.5881] device (patch-provnet-4733f83c-d091-4ff4-b60c-6ae6c11d8975-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 10:17:29 compute-0 NetworkManager[48957]: <info>  [1764929849.5904] manager: (patch-br-int-to-provnet-4733f83c-d091-4ff4-b60c-6ae6c11d8975): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Dec 05 10:17:29 compute-0 NetworkManager[48957]: <info>  [1764929849.5908] device (patch-br-int-to-provnet-4733f83c-d091-4ff4-b60c-6ae6c11d8975)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 10:17:29 compute-0 NetworkManager[48957]: <info>  [1764929849.5921] manager: (patch-provnet-4733f83c-d091-4ff4-b60c-6ae6c11d8975-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec 05 10:17:29 compute-0 NetworkManager[48957]: <info>  [1764929849.5927] manager: (patch-br-int-to-provnet-4733f83c-d091-4ff4-b60c-6ae6c11d8975): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Dec 05 10:17:29 compute-0 NetworkManager[48957]: <info>  [1764929849.5935] device (patch-provnet-4733f83c-d091-4ff4-b60c-6ae6c11d8975-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 05 10:17:29 compute-0 NetworkManager[48957]: <info>  [1764929849.5939] device (patch-br-int-to-provnet-4733f83c-d091-4ff4-b60c-6ae6c11d8975)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 05 10:17:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:29.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:29 compute-0 ovn_controller[154822]: 2025-12-05T10:17:29Z|00033|binding|INFO|Releasing lport 440d4620-f140-4cd0-9a68-d262802b1e26 from this chassis (sb_readonly=0)
Dec 05 10:17:29 compute-0 nova_compute[257087]: 2025-12-05 10:17:29.619 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:29 compute-0 nova_compute[257087]: 2025-12-05 10:17:29.623 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 217 op/s
Dec 05 10:17:29 compute-0 nova_compute[257087]: 2025-12-05 10:17:29.860 257094 DEBUG nova.compute.manager [req-8f4d5162-2f07-438a-af90-4372b5b2fa5a req-286ceada-2f2b-41de-9d80-ea6bcc5cd7fa c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Received event network-changed-aa273cb3-e801-441e-be4f-c5722f88c59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:17:29 compute-0 nova_compute[257087]: 2025-12-05 10:17:29.860 257094 DEBUG nova.compute.manager [req-8f4d5162-2f07-438a-af90-4372b5b2fa5a req-286ceada-2f2b-41de-9d80-ea6bcc5cd7fa c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Refreshing instance network info cache due to event network-changed-aa273cb3-e801-441e-be4f-c5722f88c59c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 10:17:29 compute-0 nova_compute[257087]: 2025-12-05 10:17:29.861 257094 DEBUG oslo_concurrency.lockutils [req-8f4d5162-2f07-438a-af90-4372b5b2fa5a req-286ceada-2f2b-41de-9d80-ea6bcc5cd7fa c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Acquiring lock "refresh_cache-d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:17:29 compute-0 nova_compute[257087]: 2025-12-05 10:17:29.861 257094 DEBUG oslo_concurrency.lockutils [req-8f4d5162-2f07-438a-af90-4372b5b2fa5a req-286ceada-2f2b-41de-9d80-ea6bcc5cd7fa c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Acquired lock "refresh_cache-d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:17:29 compute-0 nova_compute[257087]: 2025-12-05 10:17:29.862 257094 DEBUG nova.network.neutron [req-8f4d5162-2f07-438a-af90-4372b5b2fa5a req-286ceada-2f2b-41de-9d80-ea6bcc5cd7fa c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Refreshing network info cache for port aa273cb3-e801-441e-be4f-c5722f88c59c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 10:17:30 compute-0 nova_compute[257087]: 2025-12-05 10:17:30.371 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:30 compute-0 ceph-mon[74418]: pgmap v850: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 217 op/s
Dec 05 10:17:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:17:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:31.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:17:31 compute-0 nova_compute[257087]: 2025-12-05 10:17:31.523 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:17:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:31.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:17:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 189 op/s
Dec 05 10:17:32 compute-0 nova_compute[257087]: 2025-12-05 10:17:32.733 257094 DEBUG nova.network.neutron [req-8f4d5162-2f07-438a-af90-4372b5b2fa5a req-286ceada-2f2b-41de-9d80-ea6bcc5cd7fa c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Updated VIF entry in instance network info cache for port aa273cb3-e801-441e-be4f-c5722f88c59c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 10:17:32 compute-0 nova_compute[257087]: 2025-12-05 10:17:32.735 257094 DEBUG nova.network.neutron [req-8f4d5162-2f07-438a-af90-4372b5b2fa5a req-286ceada-2f2b-41de-9d80-ea6bcc5cd7fa c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Updating instance_info_cache with network_info: [{"id": "aa273cb3-e801-441e-be4f-c5722f88c59c", "address": "fa:16:3e:e5:31:3f", "network": {"id": "c4d0bdd2-23f5-4062-a9f2-c5c372333fcf", "bridge": "br-int", "label": "tempest-network-smoke--459715102", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "838b1c7df82149408a85854af5a04909", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa273cb3-e8", "ovs_interfaceid": "aa273cb3-e801-441e-be4f-c5722f88c59c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 10:17:32 compute-0 nova_compute[257087]: 2025-12-05 10:17:32.760 257094 DEBUG oslo_concurrency.lockutils [req-8f4d5162-2f07-438a-af90-4372b5b2fa5a req-286ceada-2f2b-41de-9d80-ea6bcc5cd7fa c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Releasing lock "refresh_cache-d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:17:32 compute-0 sudo[269031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:17:32 compute-0 sudo[269031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:17:32 compute-0 sudo[269031]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:32 compute-0 ceph-mon[74418]: pgmap v851: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 189 op/s
Dec 05 10:17:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:33.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:33.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:33.689Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:17:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 190 op/s
Dec 05 10:17:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:17:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:17:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:17:34 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:34 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:17:34 compute-0 ceph-mon[74418]: pgmap v852: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 190 op/s
Dec 05 10:17:35 compute-0 nova_compute[257087]: 2025-12-05 10:17:35.411 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:35.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:35.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:17:35] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec 05 10:17:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:17:35] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec 05 10:17:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 180 op/s
Dec 05 10:17:36 compute-0 nova_compute[257087]: 2025-12-05 10:17:36.571 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:37 compute-0 ceph-mon[74418]: pgmap v853: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 180 op/s
Dec 05 10:17:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:37.389Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:17:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:37.389Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:17:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:37.389Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:17:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:17:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:37.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:17:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 426 B/s wr, 179 op/s
Dec 05 10:17:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:38 compute-0 ovn_controller[154822]: 2025-12-05T10:17:38Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:31:3f 10.100.0.4
Dec 05 10:17:38 compute-0 ovn_controller[154822]: 2025-12-05T10:17:38Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:31:3f 10.100.0.4
Dec 05 10:17:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:17:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:17:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:17:39 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:39 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:17:39 compute-0 ceph-mon[74418]: pgmap v854: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 426 B/s wr, 179 op/s
Dec 05 10:17:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:39.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 244 op/s
Dec 05 10:17:40 compute-0 nova_compute[257087]: 2025-12-05 10:17:40.413 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:40 compute-0 sudo[269067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:17:40 compute-0 sudo[269067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:17:40 compute-0 sudo[269067]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:40 compute-0 sudo[269092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:17:40 compute-0 sudo[269092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:17:41 compute-0 ceph-mon[74418]: pgmap v855: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 244 op/s
Dec 05 10:17:41 compute-0 sudo[269092]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:41 compute-0 nova_compute[257087]: 2025-12-05 10:17:41.574 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:41.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 05 10:17:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:17:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:17:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 10:17:43 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:17:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 10:17:43 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:17:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:17:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:17:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:17:43 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:17:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:17:43 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:17:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:17:43 compute-0 ceph-mon[74418]: pgmap v856: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 05 10:17:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:17:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:17:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:17:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:17:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:17:43 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:17:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:17:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:17:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:17:43 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:17:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:17:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:17:43 compute-0 sudo[269152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:17:43 compute-0 sudo[269152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:17:43 compute-0 sudo[269152]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:43 compute-0 sudo[269177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:17:43 compute-0 sudo[269177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:17:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:43.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:17:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:43.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:17:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:43.691Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:17:43 compute-0 podman[269244]: 2025-12-05 10:17:43.742601914 +0000 UTC m=+0.079789129 container create 91377770a5cbe369123767b13b2c9fbf649d4466592f987182d03f5bb3770de8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_merkle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 10:17:43 compute-0 podman[269244]: 2025-12-05 10:17:43.702622207 +0000 UTC m=+0.039809522 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:17:43 compute-0 systemd[1]: Started libpod-conmon-91377770a5cbe369123767b13b2c9fbf649d4466592f987182d03f5bb3770de8.scope.
Dec 05 10:17:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 05 10:17:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:17:43 compute-0 podman[269244]: 2025-12-05 10:17:43.872911066 +0000 UTC m=+0.210098381 container init 91377770a5cbe369123767b13b2c9fbf649d4466592f987182d03f5bb3770de8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 05 10:17:43 compute-0 podman[269244]: 2025-12-05 10:17:43.883721419 +0000 UTC m=+0.220908634 container start 91377770a5cbe369123767b13b2c9fbf649d4466592f987182d03f5bb3770de8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_merkle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:17:43 compute-0 podman[269244]: 2025-12-05 10:17:43.893911256 +0000 UTC m=+0.231098481 container attach 91377770a5cbe369123767b13b2c9fbf649d4466592f987182d03f5bb3770de8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_merkle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 10:17:43 compute-0 systemd[1]: libpod-91377770a5cbe369123767b13b2c9fbf649d4466592f987182d03f5bb3770de8.scope: Deactivated successfully.
Dec 05 10:17:43 compute-0 crazy_merkle[269260]: 167 167
Dec 05 10:17:43 compute-0 podman[269244]: 2025-12-05 10:17:43.899359674 +0000 UTC m=+0.236546899 container died 91377770a5cbe369123767b13b2c9fbf649d4466592f987182d03f5bb3770de8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:17:43 compute-0 conmon[269260]: conmon 91377770a5cbe3691237 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91377770a5cbe369123767b13b2c9fbf649d4466592f987182d03f5bb3770de8.scope/container/memory.events
Dec 05 10:17:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-978b47d9433217d6ab9486e991db5996baa94e5ca8003437266142edfcef55a0-merged.mount: Deactivated successfully.
Dec 05 10:17:43 compute-0 podman[269244]: 2025-12-05 10:17:43.952060846 +0000 UTC m=+0.289248051 container remove 91377770a5cbe369123767b13b2c9fbf649d4466592f987182d03f5bb3770de8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 10:17:43 compute-0 systemd[1]: libpod-conmon-91377770a5cbe369123767b13b2c9fbf649d4466592f987182d03f5bb3770de8.scope: Deactivated successfully.
Dec 05 10:17:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:17:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:44 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:17:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:44 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:17:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:44 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:17:44 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:17:44 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:17:44 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:17:44 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:17:44 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:17:44 compute-0 podman[269286]: 2025-12-05 10:17:44.156083381 +0000 UTC m=+0.058669305 container create 4cc06a24398e2a81cd6303b8e512995b993575bf7257b7e0e5ac75bfc6e85b06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goldberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:17:44 compute-0 systemd[1]: Started libpod-conmon-4cc06a24398e2a81cd6303b8e512995b993575bf7257b7e0e5ac75bfc6e85b06.scope.
Dec 05 10:17:44 compute-0 podman[269286]: 2025-12-05 10:17:44.127040111 +0000 UTC m=+0.029626025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:17:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d5bf1af5f6553b2db0fc56165984b6bdbbc5ce0f15b07be43cb90bd1c5cff1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d5bf1af5f6553b2db0fc56165984b6bdbbc5ce0f15b07be43cb90bd1c5cff1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d5bf1af5f6553b2db0fc56165984b6bdbbc5ce0f15b07be43cb90bd1c5cff1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d5bf1af5f6553b2db0fc56165984b6bdbbc5ce0f15b07be43cb90bd1c5cff1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d5bf1af5f6553b2db0fc56165984b6bdbbc5ce0f15b07be43cb90bd1c5cff1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:44 compute-0 podman[269286]: 2025-12-05 10:17:44.258077363 +0000 UTC m=+0.160663277 container init 4cc06a24398e2a81cd6303b8e512995b993575bf7257b7e0e5ac75bfc6e85b06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 10:17:44 compute-0 podman[269286]: 2025-12-05 10:17:44.267449537 +0000 UTC m=+0.170035421 container start 4cc06a24398e2a81cd6303b8e512995b993575bf7257b7e0e5ac75bfc6e85b06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goldberg, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:17:44 compute-0 podman[269286]: 2025-12-05 10:17:44.270945263 +0000 UTC m=+0.173531167 container attach 4cc06a24398e2a81cd6303b8e512995b993575bf7257b7e0e5ac75bfc6e85b06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 10:17:44 compute-0 nova_compute[257087]: 2025-12-05 10:17:44.297 257094 INFO nova.compute.manager [None req-1cd127a8-f441-4b97-8d59-b1b95320c2cc 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Get console output
Dec 05 10:17:44 compute-0 nova_compute[257087]: 2025-12-05 10:17:44.307 257094 INFO oslo.privsep.daemon [None req-1cd127a8-f441-4b97-8d59-b1b95320c2cc 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpqrcjz475/privsep.sock']
Dec 05 10:17:44 compute-0 admiring_goldberg[269304]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:17:44 compute-0 admiring_goldberg[269304]: --> All data devices are unavailable
Dec 05 10:17:44 compute-0 systemd[1]: libpod-4cc06a24398e2a81cd6303b8e512995b993575bf7257b7e0e5ac75bfc6e85b06.scope: Deactivated successfully.
Dec 05 10:17:44 compute-0 conmon[269304]: conmon 4cc06a24398e2a81cd63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4cc06a24398e2a81cd6303b8e512995b993575bf7257b7e0e5ac75bfc6e85b06.scope/container/memory.events
Dec 05 10:17:44 compute-0 podman[269286]: 2025-12-05 10:17:44.645990636 +0000 UTC m=+0.548576530 container died 4cc06a24398e2a81cd6303b8e512995b993575bf7257b7e0e5ac75bfc6e85b06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 05 10:17:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4d5bf1af5f6553b2db0fc56165984b6bdbbc5ce0f15b07be43cb90bd1c5cff1-merged.mount: Deactivated successfully.
Dec 05 10:17:44 compute-0 podman[269286]: 2025-12-05 10:17:44.707102046 +0000 UTC m=+0.609687970 container remove 4cc06a24398e2a81cd6303b8e512995b993575bf7257b7e0e5ac75bfc6e85b06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goldberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 10:17:44 compute-0 systemd[1]: libpod-conmon-4cc06a24398e2a81cd6303b8e512995b993575bf7257b7e0e5ac75bfc6e85b06.scope: Deactivated successfully.
Dec 05 10:17:44 compute-0 sudo[269177]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:44 compute-0 podman[269324]: 2025-12-05 10:17:44.767063346 +0000 UTC m=+0.092551056 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:17:44 compute-0 podman[269327]: 2025-12-05 10:17:44.804555795 +0000 UTC m=+0.119921560 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 10:17:44 compute-0 sudo[269369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:17:44 compute-0 sudo[269369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:17:44 compute-0 sudo[269369]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:44 compute-0 sudo[269395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:17:44 compute-0 sudo[269395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:17:45 compute-0 ceph-mon[74418]: pgmap v857: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.110 257094 INFO oslo.privsep.daemon [None req-1cd127a8-f441-4b97-8d59-b1b95320c2cc 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Spawned new privsep daemon via rootwrap
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:44.951 269420 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:44.959 269420 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:44.962 269420 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:44.963 269420 INFO oslo.privsep.daemon [-] privsep daemon running as pid 269420
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.205 269420 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 05 10:17:45 compute-0 podman[269460]: 2025-12-05 10:17:45.345229559 +0000 UTC m=+0.056199808 container create d16e7d78471a2f2f3f152ed150bfdc010a66d51636e27c8854e7c2341499e416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_allen, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:17:45 compute-0 systemd[1]: Started libpod-conmon-d16e7d78471a2f2f3f152ed150bfdc010a66d51636e27c8854e7c2341499e416.scope.
Dec 05 10:17:45 compute-0 podman[269460]: 2025-12-05 10:17:45.313302802 +0000 UTC m=+0.024273041 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:17:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.417 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:45 compute-0 podman[269460]: 2025-12-05 10:17:45.435346708 +0000 UTC m=+0.146317017 container init d16e7d78471a2f2f3f152ed150bfdc010a66d51636e27c8854e7c2341499e416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_allen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:17:45 compute-0 podman[269460]: 2025-12-05 10:17:45.446797879 +0000 UTC m=+0.157768098 container start d16e7d78471a2f2f3f152ed150bfdc010a66d51636e27c8854e7c2341499e416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 10:17:45 compute-0 podman[269460]: 2025-12-05 10:17:45.450455709 +0000 UTC m=+0.161425928 container attach d16e7d78471a2f2f3f152ed150bfdc010a66d51636e27c8854e7c2341499e416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:17:45 compute-0 great_allen[269477]: 167 167
Dec 05 10:17:45 compute-0 systemd[1]: libpod-d16e7d78471a2f2f3f152ed150bfdc010a66d51636e27c8854e7c2341499e416.scope: Deactivated successfully.
Dec 05 10:17:45 compute-0 podman[269482]: 2025-12-05 10:17:45.49794262 +0000 UTC m=+0.026643095 container died d16e7d78471a2f2f3f152ed150bfdc010a66d51636e27c8854e7c2341499e416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_allen, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 10:17:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f84bd49ae2dc85dceec91a3f74398eb10db0f33b724e3e7fd4e229b9349a967-merged.mount: Deactivated successfully.
Dec 05 10:17:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:45.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:45 compute-0 podman[269482]: 2025-12-05 10:17:45.537229628 +0000 UTC m=+0.065930103 container remove d16e7d78471a2f2f3f152ed150bfdc010a66d51636e27c8854e7c2341499e416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 05 10:17:45 compute-0 systemd[1]: libpod-conmon-d16e7d78471a2f2f3f152ed150bfdc010a66d51636e27c8854e7c2341499e416.scope: Deactivated successfully.
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.572 257094 DEBUG oslo_concurrency.lockutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Acquiring lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.573 257094 DEBUG oslo_concurrency.lockutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.573 257094 DEBUG oslo_concurrency.lockutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Acquiring lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.573 257094 DEBUG oslo_concurrency.lockutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.574 257094 DEBUG oslo_concurrency.lockutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.575 257094 INFO nova.compute.manager [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Terminating instance
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.576 257094 DEBUG nova.compute.manager [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 10:17:45 compute-0 kernel: tapaa273cb3-e8 (unregistering): left promiscuous mode
Dec 05 10:17:45 compute-0 NetworkManager[48957]: <info>  [1764929865.6381] device (tapaa273cb3-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 10:17:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:45.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:17:45] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec 05 10:17:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:17:45] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.646 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:45 compute-0 ovn_controller[154822]: 2025-12-05T10:17:45Z|00034|binding|INFO|Releasing lport aa273cb3-e801-441e-be4f-c5722f88c59c from this chassis (sb_readonly=0)
Dec 05 10:17:45 compute-0 ovn_controller[154822]: 2025-12-05T10:17:45Z|00035|binding|INFO|Setting lport aa273cb3-e801-441e-be4f-c5722f88c59c down in Southbound
Dec 05 10:17:45 compute-0 ovn_controller[154822]: 2025-12-05T10:17:45Z|00036|binding|INFO|Removing iface tapaa273cb3-e8 ovn-installed in OVS
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.650 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:45 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:45.654 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:31:3f 10.100.0.4'], port_security=['fa:16:3e:e5:31:3f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd5af7919-6b0b-4f37-9f5b-ed2b11e11a85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '838b1c7df82149408a85854af5a04909', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7dc2076d-6a4b-4522-8174-d85e29ec45d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ea83c94-ac7d-40f6-95c6-8524308b417c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ffbdf76c910>], logical_port=aa273cb3-e801-441e-be4f-c5722f88c59c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ffbdf76c910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:17:45 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:45.656 165250 INFO neutron.agent.ovn.metadata.agent [-] Port aa273cb3-e801-441e-be4f-c5722f88c59c in datapath c4d0bdd2-23f5-4062-a9f2-c5c372333fcf unbound from our chassis
Dec 05 10:17:45 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:45.657 165250 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4d0bdd2-23f5-4062-a9f2-c5c372333fcf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:17:45 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:45.658 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[eded27d7-a8c1-4cc3-b322-22e79938a0c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:45 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:45.659 165250 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf namespace which is not needed anymore
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.668 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:45 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec 05 10:17:45 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000005.scope: Consumed 14.995s CPU time.
Dec 05 10:17:45 compute-0 systemd-machined[217607]: Machine qemu-1-instance-00000005 terminated.
Dec 05 10:17:45 compute-0 podman[269513]: 2025-12-05 10:17:45.728830675 +0000 UTC m=+0.044645445 container create e62cd3d90d5d4765c47639b7d3310cb2c130c3ccfde3b8b6d11d58dea1c9e077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_pare, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 10:17:45 compute-0 systemd[1]: Started libpod-conmon-e62cd3d90d5d4765c47639b7d3310cb2c130c3ccfde3b8b6d11d58dea1c9e077.scope.
Dec 05 10:17:45 compute-0 podman[269513]: 2025-12-05 10:17:45.710208628 +0000 UTC m=+0.026023428 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:17:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 05 10:17:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.842 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:45 compute-0 neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf[269011]: [NOTICE]   (269015) : haproxy version is 2.8.14-c23fe91
Dec 05 10:17:45 compute-0 neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf[269011]: [NOTICE]   (269015) : path to executable is /usr/sbin/haproxy
Dec 05 10:17:45 compute-0 neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf[269011]: [WARNING]  (269015) : Exiting Master process...
Dec 05 10:17:45 compute-0 neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf[269011]: [ALERT]    (269015) : Current worker (269017) exited with code 143 (Terminated)
Dec 05 10:17:45 compute-0 neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf[269011]: [WARNING]  (269015) : All workers exited. Exiting... (0)
Dec 05 10:17:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/727cc6e6b66ef7800232744fd4f1d25986d199fd6ef192f158ad3eb9abf441d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:45 compute-0 systemd[1]: libpod-8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a.scope: Deactivated successfully.
Dec 05 10:17:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/727cc6e6b66ef7800232744fd4f1d25986d199fd6ef192f158ad3eb9abf441d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/727cc6e6b66ef7800232744fd4f1d25986d199fd6ef192f158ad3eb9abf441d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/727cc6e6b66ef7800232744fd4f1d25986d199fd6ef192f158ad3eb9abf441d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:45 compute-0 podman[269545]: 2025-12-05 10:17:45.8559721 +0000 UTC m=+0.092514716 container died 8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 05 10:17:45 compute-0 podman[269513]: 2025-12-05 10:17:45.866722142 +0000 UTC m=+0.182536942 container init e62cd3d90d5d4765c47639b7d3310cb2c130c3ccfde3b8b6d11d58dea1c9e077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.870 257094 INFO nova.virt.libvirt.driver [-] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Instance destroyed successfully.
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.871 257094 DEBUG nova.objects.instance [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lazy-loading 'resources' on Instance uuid d5af7919-6b0b-4f37-9f5b-ed2b11e11a85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:17:45 compute-0 podman[269513]: 2025-12-05 10:17:45.876491978 +0000 UTC m=+0.192306768 container start e62cd3d90d5d4765c47639b7d3310cb2c130c3ccfde3b8b6d11d58dea1c9e077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:17:45 compute-0 podman[269513]: 2025-12-05 10:17:45.888159935 +0000 UTC m=+0.203974715 container attach e62cd3d90d5d4765c47639b7d3310cb2c130c3ccfde3b8b6d11d58dea1c9e077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:17:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a-userdata-shm.mount: Deactivated successfully.
Dec 05 10:17:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-60a11c058f5f2e8dd0f52ac4197caacd6cc32cf3fa6aed40fa2b7e262d0c083a-merged.mount: Deactivated successfully.
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.902 257094 DEBUG nova.virt.libvirt.vif [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T10:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2015202476',display_name='tempest-TestNetworkBasicOps-server-2015202476',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2015202476',id=5,image_ref='4a6d0006-e2d8-47cd-a44b-309518215a42',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBERgToEyV2W8KKM4rKMwDAoJ1Hw78zzJ0gcjKRolKcqupsdl1NMGZNiHIOWKfB7s8QyL+/5bbhT6Fx7YkgeXNC08RMQY+TxJd2lkJgkLysEUh0JEcMaGRFjc7I4wY0ZcSA==',key_name='tempest-TestNetworkBasicOps-1454695835',keypairs=<?>,launch_index=0,launched_at=2025-12-05T10:17:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='838b1c7df82149408a85854af5a04909',ramdisk_id='',reservation_id='r-cz2ocla0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4a6d0006-e2d8-47cd-a44b-309518215a42',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-410983719',owner_user_name='tempest-TestNetworkBasicOps-410983719-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T10:17:24Z,user_data=None,user_id='769d2179358946d682e622908baeec49',uuid=d5af7919-6b0b-4f37-9f5b-ed2b11e11a85,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa273cb3-e801-441e-be4f-c5722f88c59c", "address": "fa:16:3e:e5:31:3f", "network": {"id": "c4d0bdd2-23f5-4062-a9f2-c5c372333fcf", "bridge": "br-int", "label": "tempest-network-smoke--459715102", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "838b1c7df82149408a85854af5a04909", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa273cb3-e8", "ovs_interfaceid": "aa273cb3-e801-441e-be4f-c5722f88c59c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.902 257094 DEBUG nova.network.os_vif_util [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Converting VIF {"id": "aa273cb3-e801-441e-be4f-c5722f88c59c", "address": "fa:16:3e:e5:31:3f", "network": {"id": "c4d0bdd2-23f5-4062-a9f2-c5c372333fcf", "bridge": "br-int", "label": "tempest-network-smoke--459715102", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "838b1c7df82149408a85854af5a04909", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa273cb3-e8", "ovs_interfaceid": "aa273cb3-e801-441e-be4f-c5722f88c59c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.903 257094 DEBUG nova.network.os_vif_util [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa273cb3-e801-441e-be4f-c5722f88c59c,network=Network(c4d0bdd2-23f5-4062-a9f2-c5c372333fcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa273cb3-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.904 257094 DEBUG os_vif [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa273cb3-e801-441e-be4f-c5722f88c59c,network=Network(c4d0bdd2-23f5-4062-a9f2-c5c372333fcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa273cb3-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.906 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.907 257094 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa273cb3-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:45 compute-0 podman[269545]: 2025-12-05 10:17:45.909062093 +0000 UTC m=+0.145604699 container cleanup 8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.908 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.910 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.918 257094 INFO os_vif [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa273cb3-e801-441e-be4f-c5722f88c59c,network=Network(c4d0bdd2-23f5-4062-a9f2-c5c372333fcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa273cb3-e8')
Dec 05 10:17:45 compute-0 systemd[1]: libpod-conmon-8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a.scope: Deactivated successfully.
Dec 05 10:17:45 compute-0 podman[269592]: 2025-12-05 10:17:45.985888211 +0000 UTC m=+0.054310447 container remove 8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 05 10:17:45 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:45.990 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[047430f2-8eb6-4523-b735-8f25ef7b1dfa]: (4, ('Fri Dec  5 10:17:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf (8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a)\n8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a\nFri Dec  5 10:17:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf (8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a)\n8eebb1825e0fce808cdff5f0c2832d2b9c628c02580bdabb71e1e130f49cec7a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:45 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:45.993 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[e0606747-9f4e-4e0b-b0da-85f35659292a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:45 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:45.994 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d0bdd2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:45 compute-0 nova_compute[257087]: 2025-12-05 10:17:45.996 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:45 compute-0 kernel: tapc4d0bdd2-20: left promiscuous mode
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.010 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:46 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:46.014 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[32a6b433-37c5-4aee-a4a0-2fa74aef8105]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:46 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:46.032 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[c9589548-5880-458a-9266-1d42e5dd0fa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:46 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:46.033 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[c224802d-26b4-4ca9-853e-0ff1bc37902c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:46 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:46.047 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[079ec1aa-8c1b-4de2-a461-e188793742de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461759, 'reachable_time': 26320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269625, 'error': None, 'target': 'ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:46 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:46.057 165514 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c4d0bdd2-23f5-4062-a9f2-c5c372333fcf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 10:17:46 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:46.059 165514 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e7bdf2-8f63-4925-82e2-2090bf662795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:46 compute-0 determined_pare[269555]: {
Dec 05 10:17:46 compute-0 determined_pare[269555]:     "1": [
Dec 05 10:17:46 compute-0 determined_pare[269555]:         {
Dec 05 10:17:46 compute-0 determined_pare[269555]:             "devices": [
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "/dev/loop3"
Dec 05 10:17:46 compute-0 determined_pare[269555]:             ],
Dec 05 10:17:46 compute-0 determined_pare[269555]:             "lv_name": "ceph_lv0",
Dec 05 10:17:46 compute-0 determined_pare[269555]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:17:46 compute-0 determined_pare[269555]:             "lv_size": "21470642176",
Dec 05 10:17:46 compute-0 determined_pare[269555]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:17:46 compute-0 determined_pare[269555]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:17:46 compute-0 determined_pare[269555]:             "name": "ceph_lv0",
Dec 05 10:17:46 compute-0 determined_pare[269555]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:17:46 compute-0 determined_pare[269555]:             "tags": {
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.cluster_name": "ceph",
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.crush_device_class": "",
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.encrypted": "0",
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.osd_id": "1",
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.type": "block",
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.vdo": "0",
Dec 05 10:17:46 compute-0 determined_pare[269555]:                 "ceph.with_tpm": "0"
Dec 05 10:17:46 compute-0 determined_pare[269555]:             },
Dec 05 10:17:46 compute-0 determined_pare[269555]:             "type": "block",
Dec 05 10:17:46 compute-0 determined_pare[269555]:             "vg_name": "ceph_vg0"
Dec 05 10:17:46 compute-0 determined_pare[269555]:         }
Dec 05 10:17:46 compute-0 determined_pare[269555]:     ]
Dec 05 10:17:46 compute-0 determined_pare[269555]: }
Dec 05 10:17:46 compute-0 systemd[1]: libpod-e62cd3d90d5d4765c47639b7d3310cb2c130c3ccfde3b8b6d11d58dea1c9e077.scope: Deactivated successfully.
Dec 05 10:17:46 compute-0 podman[269632]: 2025-12-05 10:17:46.231205868 +0000 UTC m=+0.027206390 container died e62cd3d90d5d4765c47639b7d3310cb2c130c3ccfde3b8b6d11d58dea1c9e077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_pare, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:17:46 compute-0 podman[269632]: 2025-12-05 10:17:46.345627908 +0000 UTC m=+0.141628420 container remove e62cd3d90d5d4765c47639b7d3310cb2c130c3ccfde3b8b6d11d58dea1c9e077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:17:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-727cc6e6b66ef7800232744fd4f1d25986d199fd6ef192f158ad3eb9abf441d8-merged.mount: Deactivated successfully.
Dec 05 10:17:46 compute-0 systemd[1]: run-netns-ovnmeta\x2dc4d0bdd2\x2d23f5\x2d4062\x2da9f2\x2dc5c372333fcf.mount: Deactivated successfully.
Dec 05 10:17:46 compute-0 systemd[1]: libpod-conmon-e62cd3d90d5d4765c47639b7d3310cb2c130c3ccfde3b8b6d11d58dea1c9e077.scope: Deactivated successfully.
Dec 05 10:17:46 compute-0 sudo[269395]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:46 compute-0 sudo[269649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:17:46 compute-0 sudo[269649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:17:46 compute-0 sudo[269649]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.529 257094 INFO nova.virt.libvirt.driver [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Deleting instance files /var/lib/nova/instances/d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_del
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.531 257094 INFO nova.virt.libvirt.driver [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Deletion of /var/lib/nova/instances/d5af7919-6b0b-4f37-9f5b-ed2b11e11a85_del complete
Dec 05 10:17:46 compute-0 sudo[269674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:17:46 compute-0 sudo[269674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.596 257094 DEBUG nova.virt.libvirt.host [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.597 257094 INFO nova.virt.libvirt.host [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] UEFI support detected
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.601 257094 INFO nova.compute.manager [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Took 1.02 seconds to destroy the instance on the hypervisor.
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.602 257094 DEBUG oslo.service.loopingcall [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.602 257094 DEBUG nova.compute.manager [-] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.602 257094 DEBUG nova.network.neutron [-] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.614 257094 DEBUG nova.compute.manager [req-67e1da3a-2d33-4c46-8827-ee737c141ecc req-2275637f-e8a3-4630-bac1-1737f4b6d151 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Received event network-vif-unplugged-aa273cb3-e801-441e-be4f-c5722f88c59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.615 257094 DEBUG oslo_concurrency.lockutils [req-67e1da3a-2d33-4c46-8827-ee737c141ecc req-2275637f-e8a3-4630-bac1-1737f4b6d151 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Acquiring lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.615 257094 DEBUG oslo_concurrency.lockutils [req-67e1da3a-2d33-4c46-8827-ee737c141ecc req-2275637f-e8a3-4630-bac1-1737f4b6d151 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.615 257094 DEBUG oslo_concurrency.lockutils [req-67e1da3a-2d33-4c46-8827-ee737c141ecc req-2275637f-e8a3-4630-bac1-1737f4b6d151 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.616 257094 DEBUG nova.compute.manager [req-67e1da3a-2d33-4c46-8827-ee737c141ecc req-2275637f-e8a3-4630-bac1-1737f4b6d151 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] No waiting events found dispatching network-vif-unplugged-aa273cb3-e801-441e-be4f-c5722f88c59c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:17:46 compute-0 nova_compute[257087]: 2025-12-05 10:17:46.616 257094 DEBUG nova.compute.manager [req-67e1da3a-2d33-4c46-8827-ee737c141ecc req-2275637f-e8a3-4630-bac1-1737f4b6d151 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Received event network-vif-unplugged-aa273cb3-e801-441e-be4f-c5722f88c59c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 10:17:46 compute-0 podman[269740]: 2025-12-05 10:17:46.977797639 +0000 UTC m=+0.044437839 container create b3eedf764980a0ac8106de68699362575584d7dd8b91fc7bb5bd5c18cab28b65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wu, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 05 10:17:47 compute-0 systemd[1]: Started libpod-conmon-b3eedf764980a0ac8106de68699362575584d7dd8b91fc7bb5bd5c18cab28b65.scope.
Dec 05 10:17:47 compute-0 podman[269740]: 2025-12-05 10:17:46.957325543 +0000 UTC m=+0.023965733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:17:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:17:47 compute-0 podman[269740]: 2025-12-05 10:17:47.084638403 +0000 UTC m=+0.151278623 container init b3eedf764980a0ac8106de68699362575584d7dd8b91fc7bb5bd5c18cab28b65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wu, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:17:47 compute-0 ceph-mon[74418]: pgmap v858: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 05 10:17:47 compute-0 podman[269740]: 2025-12-05 10:17:47.098269703 +0000 UTC m=+0.164909883 container start b3eedf764980a0ac8106de68699362575584d7dd8b91fc7bb5bd5c18cab28b65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:17:47 compute-0 podman[269740]: 2025-12-05 10:17:47.101930922 +0000 UTC m=+0.168571112 container attach b3eedf764980a0ac8106de68699362575584d7dd8b91fc7bb5bd5c18cab28b65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wu, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:17:47 compute-0 thirsty_wu[269757]: 167 167
Dec 05 10:17:47 compute-0 systemd[1]: libpod-b3eedf764980a0ac8106de68699362575584d7dd8b91fc7bb5bd5c18cab28b65.scope: Deactivated successfully.
Dec 05 10:17:47 compute-0 podman[269740]: 2025-12-05 10:17:47.106183508 +0000 UTC m=+0.172823678 container died b3eedf764980a0ac8106de68699362575584d7dd8b91fc7bb5bd5c18cab28b65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 10:17:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-12714ad78e4c6e4178f970fc7ef6ffeb348e50021fa428e76500ed02cb9a76c0-merged.mount: Deactivated successfully.
Dec 05 10:17:47 compute-0 podman[269740]: 2025-12-05 10:17:47.31451536 +0000 UTC m=+0.381155570 container remove b3eedf764980a0ac8106de68699362575584d7dd8b91fc7bb5bd5c18cab28b65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:17:47 compute-0 systemd[1]: libpod-conmon-b3eedf764980a0ac8106de68699362575584d7dd8b91fc7bb5bd5c18cab28b65.scope: Deactivated successfully.
Dec 05 10:17:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:47.390Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:17:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:47.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:47 compute-0 podman[269782]: 2025-12-05 10:17:47.539619717 +0000 UTC m=+0.049638429 container create ae28588d468fb25dee5c61e316d67e3644149b5aa3d125f1b695335e066c6a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_davinci, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:17:47 compute-0 systemd[1]: Started libpod-conmon-ae28588d468fb25dee5c61e316d67e3644149b5aa3d125f1b695335e066c6a70.scope.
Dec 05 10:17:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:17:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3241189efdf20774181a1f38674a99c58c6292fe8a51001af8c541ab30858761/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3241189efdf20774181a1f38674a99c58c6292fe8a51001af8c541ab30858761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3241189efdf20774181a1f38674a99c58c6292fe8a51001af8c541ab30858761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:47 compute-0 podman[269782]: 2025-12-05 10:17:47.521345721 +0000 UTC m=+0.031364453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:17:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3241189efdf20774181a1f38674a99c58c6292fe8a51001af8c541ab30858761/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:47.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:47 compute-0 podman[269782]: 2025-12-05 10:17:47.650974064 +0000 UTC m=+0.160992796 container init ae28588d468fb25dee5c61e316d67e3644149b5aa3d125f1b695335e066c6a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_davinci, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 10:17:47 compute-0 podman[269782]: 2025-12-05 10:17:47.658037377 +0000 UTC m=+0.168056089 container start ae28588d468fb25dee5c61e316d67e3644149b5aa3d125f1b695335e066c6a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 10:17:47 compute-0 podman[269782]: 2025-12-05 10:17:47.666347992 +0000 UTC m=+0.176366704 container attach ae28588d468fb25dee5c61e316d67e3644149b5aa3d125f1b695335e066c6a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 05 10:17:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 05 10:17:48 compute-0 ceph-mon[74418]: pgmap v859: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 05 10:17:48 compute-0 lvm[269872]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:17:48 compute-0 lvm[269872]: VG ceph_vg0 finished
Dec 05 10:17:48 compute-0 amazing_davinci[269798]: {}
Dec 05 10:17:48 compute-0 systemd[1]: libpod-ae28588d468fb25dee5c61e316d67e3644149b5aa3d125f1b695335e066c6a70.scope: Deactivated successfully.
Dec 05 10:17:48 compute-0 podman[269782]: 2025-12-05 10:17:48.399151278 +0000 UTC m=+0.909169990 container died ae28588d468fb25dee5c61e316d67e3644149b5aa3d125f1b695335e066c6a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 10:17:48 compute-0 systemd[1]: libpod-ae28588d468fb25dee5c61e316d67e3644149b5aa3d125f1b695335e066c6a70.scope: Consumed 1.152s CPU time.
Dec 05 10:17:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3241189efdf20774181a1f38674a99c58c6292fe8a51001af8c541ab30858761-merged.mount: Deactivated successfully.
Dec 05 10:17:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:48 compute-0 podman[269782]: 2025-12-05 10:17:48.484446337 +0000 UTC m=+0.994465049 container remove ae28588d468fb25dee5c61e316d67e3644149b5aa3d125f1b695335e066c6a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_davinci, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:17:48 compute-0 systemd[1]: libpod-conmon-ae28588d468fb25dee5c61e316d67e3644149b5aa3d125f1b695335e066c6a70.scope: Deactivated successfully.
Dec 05 10:17:48 compute-0 sudo[269674]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:17:48 compute-0 podman[269890]: 2025-12-05 10:17:48.605569078 +0000 UTC m=+0.078117024 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Dec 05 10:17:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:17:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:17:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.695 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:48 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:48.696 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:17:48 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:48.698 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:17:48 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:17:48.699 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.718 257094 DEBUG nova.network.neutron [-] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.721 257094 DEBUG nova.compute.manager [req-6bc76f86-8008-4965-92b5-0f364e27500f req-34d5e921-837f-47b8-be9c-de140ea15945 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Received event network-vif-plugged-aa273cb3-e801-441e-be4f-c5722f88c59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.722 257094 DEBUG oslo_concurrency.lockutils [req-6bc76f86-8008-4965-92b5-0f364e27500f req-34d5e921-837f-47b8-be9c-de140ea15945 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Acquiring lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.722 257094 DEBUG oslo_concurrency.lockutils [req-6bc76f86-8008-4965-92b5-0f364e27500f req-34d5e921-837f-47b8-be9c-de140ea15945 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.722 257094 DEBUG oslo_concurrency.lockutils [req-6bc76f86-8008-4965-92b5-0f364e27500f req-34d5e921-837f-47b8-be9c-de140ea15945 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.722 257094 DEBUG nova.compute.manager [req-6bc76f86-8008-4965-92b5-0f364e27500f req-34d5e921-837f-47b8-be9c-de140ea15945 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] No waiting events found dispatching network-vif-plugged-aa273cb3-e801-441e-be4f-c5722f88c59c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.723 257094 WARNING nova.compute.manager [req-6bc76f86-8008-4965-92b5-0f364e27500f req-34d5e921-837f-47b8-be9c-de140ea15945 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Received unexpected event network-vif-plugged-aa273cb3-e801-441e-be4f-c5722f88c59c for instance with vm_state active and task_state deleting.
Dec 05 10:17:48 compute-0 sudo[269917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:17:48 compute-0 sudo[269917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:17:48 compute-0 sudo[269917]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.780 257094 INFO nova.compute.manager [-] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Took 2.18 seconds to deallocate network for instance.
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.803 257094 DEBUG nova.compute.manager [req-55c17b2b-62ef-411f-b76e-0fb2967a1121 req-6f4101d3-dcf6-4504-91e5-47d95acacdd1 c469fac6ba424b9f8e93e288bb6b68e3 7e584ed639a34c89ac8bf811eca58ddb - - default default] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Received event network-vif-deleted-aa273cb3-e801-441e-be4f-c5722f88c59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.833 257094 DEBUG oslo_concurrency.lockutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.833 257094 DEBUG oslo_concurrency.lockutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:48 compute-0 nova_compute[257087]: 2025-12-05 10:17:48.885 257094 DEBUG oslo_concurrency.processutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:17:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:17:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:17:49 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:49 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:17:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:17:49 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587217293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:49 compute-0 nova_compute[257087]: 2025-12-05 10:17:49.351 257094 DEBUG oslo_concurrency.processutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:49 compute-0 nova_compute[257087]: 2025-12-05 10:17:49.361 257094 DEBUG nova.compute.provider_tree [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Updating inventory in ProviderTree for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:17:49 compute-0 nova_compute[257087]: 2025-12-05 10:17:49.425 257094 ERROR nova.scheduler.client.report [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] [req-39e5960c-ba8d-4cea-9854-66ac2d187c72] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID bad8518e-442e-4fc2-b7f3-2c453f1840d6.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-39e5960c-ba8d-4cea-9854-66ac2d187c72"}]}
Dec 05 10:17:49 compute-0 nova_compute[257087]: 2025-12-05 10:17:49.446 257094 DEBUG nova.scheduler.client.report [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Refreshing inventories for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 10:17:49 compute-0 nova_compute[257087]: 2025-12-05 10:17:49.473 257094 DEBUG nova.scheduler.client.report [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Updating ProviderTree inventory for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 10:17:49 compute-0 nova_compute[257087]: 2025-12-05 10:17:49.474 257094 DEBUG nova.compute.provider_tree [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Updating inventory in ProviderTree for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:17:49 compute-0 nova_compute[257087]: 2025-12-05 10:17:49.499 257094 DEBUG nova.scheduler.client.report [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Refreshing aggregate associations for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 10:17:49 compute-0 nova_compute[257087]: 2025-12-05 10:17:49.529 257094 DEBUG nova.scheduler.client.report [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Refreshing trait associations for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6, traits: HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_MMX,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 10:17:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:49.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:49 compute-0 nova_compute[257087]: 2025-12-05 10:17:49.570 257094 DEBUG oslo_concurrency.processutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:49.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:17:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:17:49 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1587217293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Dec 05 10:17:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:17:50 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2361492650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:50 compute-0 nova_compute[257087]: 2025-12-05 10:17:50.049 257094 DEBUG oslo_concurrency.processutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:50 compute-0 nova_compute[257087]: 2025-12-05 10:17:50.057 257094 DEBUG nova.compute.provider_tree [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Updating inventory in ProviderTree for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:17:50 compute-0 nova_compute[257087]: 2025-12-05 10:17:50.103 257094 DEBUG nova.scheduler.client.report [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Updated inventory for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 05 10:17:50 compute-0 nova_compute[257087]: 2025-12-05 10:17:50.104 257094 DEBUG nova.compute.provider_tree [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Updating resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 05 10:17:50 compute-0 nova_compute[257087]: 2025-12-05 10:17:50.104 257094 DEBUG nova.compute.provider_tree [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Updating inventory in ProviderTree for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:17:50 compute-0 nova_compute[257087]: 2025-12-05 10:17:50.145 257094 DEBUG oslo_concurrency.lockutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:50 compute-0 nova_compute[257087]: 2025-12-05 10:17:50.189 257094 INFO nova.scheduler.client.report [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Deleted allocations for instance d5af7919-6b0b-4f37-9f5b-ed2b11e11a85
Dec 05 10:17:50 compute-0 nova_compute[257087]: 2025-12-05 10:17:50.275 257094 DEBUG oslo_concurrency.lockutils [None req-d101d905-bb29-49e7-b8a4-9c97c8b4cf16 769d2179358946d682e622908baeec49 838b1c7df82149408a85854af5a04909 - - default default] Lock "d5af7919-6b0b-4f37-9f5b-ed2b11e11a85" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:50 compute-0 nova_compute[257087]: 2025-12-05 10:17:50.420 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:50 compute-0 ceph-mon[74418]: pgmap v860: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Dec 05 10:17:50 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2361492650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:50 compute-0 nova_compute[257087]: 2025-12-05 10:17:50.908 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:51.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:51.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Dec 05 10:17:52 compute-0 nova_compute[257087]: 2025-12-05 10:17:52.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:52 compute-0 nova_compute[257087]: 2025-12-05 10:17:52.530 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 10:17:52 compute-0 ceph-mon[74418]: pgmap v861: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Dec 05 10:17:52 compute-0 nova_compute[257087]: 2025-12-05 10:17:52.552 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 10:17:53 compute-0 sudo[269990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:17:53 compute-0 sudo[269990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:17:53 compute-0 sudo[269990]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:53.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:53.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:53.691Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:17:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 65 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 13 KiB/s wr, 47 op/s
Dec 05 10:17:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:17:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:17:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:17:54 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:54 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:17:54 compute-0 nova_compute[257087]: 2025-12-05 10:17:54.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:54 compute-0 ceph-mon[74418]: pgmap v862: 353 pgs: 353 active+clean; 65 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 13 KiB/s wr, 47 op/s
Dec 05 10:17:55 compute-0 nova_compute[257087]: 2025-12-05 10:17:55.423 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:55.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:17:55] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec 05 10:17:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:17:55] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec 05 10:17:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:55.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:55 compute-0 nova_compute[257087]: 2025-12-05 10:17:55.824 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:55 compute-0 nova_compute[257087]: 2025-12-05 10:17:55.824 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 10:17:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 41 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Dec 05 10:17:55 compute-0 nova_compute[257087]: 2025-12-05 10:17:55.910 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:57 compute-0 ceph-mon[74418]: pgmap v863: 353 pgs: 353 active+clean; 41 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Dec 05 10:17:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/519496824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:17:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:17:57.393Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:17:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:57.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:17:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:17:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:17:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:57.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:17:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:17:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:17:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:17:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:17:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:17:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:17:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 41 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Dec 05 10:17:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2908043905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:17:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2908043905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:17:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:17:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:17:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:59 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:17:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:59 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:17:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:17:59 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:17:59 compute-0 ceph-mon[74418]: pgmap v864: 353 pgs: 353 active+clean; 41 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Dec 05 10:17:59 compute-0 nova_compute[257087]: 2025-12-05 10:17:59.545 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:59 compute-0 nova_compute[257087]: 2025-12-05 10:17:59.546 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:17:59.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:17:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:17:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:17:59.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:17:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Dec 05 10:18:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/4251598331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:18:00 compute-0 nova_compute[257087]: 2025-12-05 10:18:00.426 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:00 compute-0 nova_compute[257087]: 2025-12-05 10:18:00.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:00 compute-0 nova_compute[257087]: 2025-12-05 10:18:00.525 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:00 compute-0 nova_compute[257087]: 2025-12-05 10:18:00.670 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:00 compute-0 nova_compute[257087]: 2025-12-05 10:18:00.710 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:00 compute-0 nova_compute[257087]: 2025-12-05 10:18:00.869 257094 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764929865.8684232, d5af7919-6b0b-4f37-9f5b-ed2b11e11a85 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:18:00 compute-0 nova_compute[257087]: 2025-12-05 10:18:00.870 257094 INFO nova.compute.manager [-] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] VM Stopped (Lifecycle Event)
Dec 05 10:18:00 compute-0 nova_compute[257087]: 2025-12-05 10:18:00.913 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:01 compute-0 nova_compute[257087]: 2025-12-05 10:18:01.051 257094 DEBUG nova.compute.manager [None req-69560934-706e-4096-b92c-ff65fcda8bbc - - - - - -] [instance: d5af7919-6b0b-4f37-9f5b-ed2b11e11a85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:18:01 compute-0 nova_compute[257087]: 2025-12-05 10:18:01.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:01 compute-0 nova_compute[257087]: 2025-12-05 10:18:01.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:01.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:01 compute-0 ceph-mon[74418]: pgmap v865: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Dec 05 10:18:01 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1213428747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:18:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:01.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:01 compute-0 nova_compute[257087]: 2025-12-05 10:18:01.683 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:01 compute-0 nova_compute[257087]: 2025-12-05 10:18:01.684 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:01 compute-0 nova_compute[257087]: 2025-12-05 10:18:01.684 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:01 compute-0 nova_compute[257087]: 2025-12-05 10:18:01.684 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:18:01 compute-0 nova_compute[257087]: 2025-12-05 10:18:01.684 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:18:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 05 10:18:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:18:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3865367274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:18:02 compute-0 nova_compute[257087]: 2025-12-05 10:18:02.178 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:18:02 compute-0 nova_compute[257087]: 2025-12-05 10:18:02.423 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:02 compute-0 nova_compute[257087]: 2025-12-05 10:18:02.499 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:18:02 compute-0 nova_compute[257087]: 2025-12-05 10:18:02.501 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4553MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:18:02 compute-0 nova_compute[257087]: 2025-12-05 10:18:02.501 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:02 compute-0 nova_compute[257087]: 2025-12-05 10:18:02.502 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:02 compute-0 ceph-mon[74418]: pgmap v866: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 05 10:18:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3865367274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:18:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3011275788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:18:02 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec 05 10:18:02 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:02.981651) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:18:02 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec 05 10:18:02 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929882981867, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2122, "num_deletes": 251, "total_data_size": 4404286, "memory_usage": 4470000, "flush_reason": "Manual Compaction"}
Dec 05 10:18:02 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec 05 10:18:03 compute-0 nova_compute[257087]: 2025-12-05 10:18:03.061 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:18:03 compute-0 nova_compute[257087]: 2025-12-05 10:18:03.062 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:18:03 compute-0 nova_compute[257087]: 2025-12-05 10:18:03.104 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929883284220, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4235555, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24897, "largest_seqno": 27018, "table_properties": {"data_size": 4225907, "index_size": 6078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19810, "raw_average_key_size": 20, "raw_value_size": 4206741, "raw_average_value_size": 4319, "num_data_blocks": 261, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929675, "oldest_key_time": 1764929675, "file_creation_time": 1764929882, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 302691 microseconds, and 14452 cpu microseconds.
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.284340) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4235555 bytes OK
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.284399) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.288824) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.288885) EVENT_LOG_v1 {"time_micros": 1764929883288875, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.288933) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4395565, prev total WAL file size 4395565, number of live WAL files 2.
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.290192) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(4136KB)], [56(12MB)]
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929883290349, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 17310277, "oldest_snapshot_seqno": -1}
Dec 05 10:18:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5974 keys, 15151355 bytes, temperature: kUnknown
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929883495401, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 15151355, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15109981, "index_size": 25328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 152036, "raw_average_key_size": 25, "raw_value_size": 15000548, "raw_average_value_size": 2510, "num_data_blocks": 1026, "num_entries": 5974, "num_filter_entries": 5974, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764929883, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:18:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:03.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:18:03 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329519085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:18:03 compute-0 nova_compute[257087]: 2025-12-05 10:18:03.616 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:18:03 compute-0 nova_compute[257087]: 2025-12-05 10:18:03.623 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:18:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:03.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:03.692Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:18:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:03.693Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.516794) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 15151355 bytes
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.798591) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.4 rd, 73.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.5 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6494, records dropped: 520 output_compression: NoCompression
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.798640) EVENT_LOG_v1 {"time_micros": 1764929883798621, "job": 30, "event": "compaction_finished", "compaction_time_micros": 205191, "compaction_time_cpu_micros": 37479, "output_level": 6, "num_output_files": 1, "total_output_size": 15151355, "num_input_records": 6494, "num_output_records": 5974, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929883800212, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929883804913, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.290041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.805094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.805104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.805108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.805112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:18:03 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:18:03.805116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:18:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 05 10:18:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:04 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:04 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:04 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:04 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:05 compute-0 nova_compute[257087]: 2025-12-05 10:18:05.428 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:18:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:05.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:18:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:18:05] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Dec 05 10:18:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:18:05] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Dec 05 10:18:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:05.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 1.2 KiB/s wr, 10 op/s
Dec 05 10:18:05 compute-0 nova_compute[257087]: 2025-12-05 10:18:05.915 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2321564914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:18:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/329519085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:18:06 compute-0 nova_compute[257087]: 2025-12-05 10:18:06.579 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:18:06 compute-0 nova_compute[257087]: 2025-12-05 10:18:06.612 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:18:06 compute-0 nova_compute[257087]: 2025-12-05 10:18:06.612 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:07.395Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:18:07 compute-0 ceph-mon[74418]: pgmap v867: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 05 10:18:07 compute-0 ceph-mon[74418]: pgmap v868: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 1.2 KiB/s wr, 10 op/s
Dec 05 10:18:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:07.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 05 10:18:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:07.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 05 10:18:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:18:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:08 compute-0 ceph-mon[74418]: pgmap v869: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 0 op/s
Dec 05 10:18:08 compute-0 nova_compute[257087]: 2025-12-05 10:18:08.612 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:08 compute-0 nova_compute[257087]: 2025-12-05 10:18:08.613 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:18:08 compute-0 nova_compute[257087]: 2025-12-05 10:18:08.613 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:18:08 compute-0 nova_compute[257087]: 2025-12-05 10:18:08.629 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:18:08 compute-0 nova_compute[257087]: 2025-12-05 10:18:08.629 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:08 compute-0 nova_compute[257087]: 2025-12-05 10:18:08.630 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:08 compute-0 nova_compute[257087]: 2025-12-05 10:18:08.631 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:18:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:09 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:09 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:09.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:09.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Dec 05 10:18:10 compute-0 nova_compute[257087]: 2025-12-05 10:18:10.431 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:10 compute-0 ceph-mon[74418]: pgmap v870: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Dec 05 10:18:10 compute-0 nova_compute[257087]: 2025-12-05 10:18:10.918 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:11.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:11.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:18:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:18:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:18:12 compute-0 ceph-mon[74418]: pgmap v871: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:18:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:18:13 compute-0 sudo[270081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:18:13 compute-0 sudo[270081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:18:13 compute-0 sudo[270081]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:13.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:13.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:13.694Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:18:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:18:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:14 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:14 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:14 compute-0 ceph-mon[74418]: pgmap v872: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:18:15 compute-0 podman[270108]: 2025-12-05 10:18:15.406014843 +0000 UTC m=+0.066762886 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:18:15 compute-0 podman[270109]: 2025-12-05 10:18:15.409917059 +0000 UTC m=+0.070753194 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:18:15 compute-0 nova_compute[257087]: 2025-12-05 10:18:15.434 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:15.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:18:15] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Dec 05 10:18:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:18:15] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Dec 05 10:18:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:15.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:18:15 compute-0 nova_compute[257087]: 2025-12-05 10:18:15.969 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:16 compute-0 ceph-mon[74418]: pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:18:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:17.396Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:18:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:18:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:17.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:18:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:17.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:18:18 compute-0 ceph-mon[74418]: pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 10:18:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:19 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [WARNING] 338/101819 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 05 10:18:19 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl[97557]: [ALERT] 338/101819 (4) : backend 'backend' has no server available!
Dec 05 10:18:19 compute-0 podman[270152]: 2025-12-05 10:18:19.530531198 +0000 UTC m=+0.193529570 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:18:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:19.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:19.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 50 op/s
Dec 05 10:18:20 compute-0 nova_compute[257087]: 2025-12-05 10:18:20.436 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:18:20.575 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:18:20.576 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:18:20.576 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:20 compute-0 nova_compute[257087]: 2025-12-05 10:18:20.971 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:21.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:21 compute-0 ceph-mon[74418]: pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 50 op/s
Dec 05 10:18:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1574770103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:18:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:21.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec 05 10:18:22 compute-0 ceph-mon[74418]: pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec 05 10:18:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:23.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:23.696Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:18:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:23.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 63 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 977 KiB/s wr, 83 op/s
Dec 05 10:18:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:24 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:24 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:24 compute-0 ceph-mon[74418]: pgmap v877: 353 pgs: 353 active+clean; 63 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 977 KiB/s wr, 83 op/s
Dec 05 10:18:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:25 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:25 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:25 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:25 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:25 compute-0 nova_compute[257087]: 2025-12-05 10:18:25.474 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:25.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:18:25] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Dec 05 10:18:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:18:25] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Dec 05 10:18:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:25.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Dec 05 10:18:25 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3548589784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:18:25 compute-0 nova_compute[257087]: 2025-12-05 10:18:25.977 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:26 compute-0 ceph-mon[74418]: pgmap v878: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Dec 05 10:18:26 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1575831511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:18:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:27.397Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:18:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:27.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:18:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:18:27
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['volumes', '.nfs', 'backups', '.mgr', 'images', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:18:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:27.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Dec 05 10:18:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:18:27 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:18:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:18:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:28 compute-0 ceph-mon[74418]: pgmap v879: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Dec 05 10:18:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:29.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:29.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Dec 05 10:18:30 compute-0 nova_compute[257087]: 2025-12-05 10:18:30.477 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:30 compute-0 nova_compute[257087]: 2025-12-05 10:18:30.980 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:30 compute-0 ceph-mon[74418]: pgmap v880: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Dec 05 10:18:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:31.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:31.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Dec 05 10:18:32 compute-0 ceph-mon[74418]: pgmap v881: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Dec 05 10:18:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:33 compute-0 sudo[270193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:18:33 compute-0 sudo[270193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:18:33 compute-0 sudo[270193]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:33.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:33.698Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:18:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:33.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 05 10:18:35 compute-0 nova_compute[257087]: 2025-12-05 10:18:35.528 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:18:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:35.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:18:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:18:35] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Dec 05 10:18:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:18:35] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Dec 05 10:18:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:35.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 850 KiB/s wr, 79 op/s
Dec 05 10:18:35 compute-0 nova_compute[257087]: 2025-12-05 10:18:35.983 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:36 compute-0 ceph-mon[74418]: pgmap v882: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 05 10:18:37 compute-0 ceph-mon[74418]: pgmap v883: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 850 KiB/s wr, 79 op/s
Dec 05 10:18:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:37.398Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:18:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:37.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:37.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Dec 05 10:18:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:38 compute-0 ceph-mon[74418]: pgmap v884: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Dec 05 10:18:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:39.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:39.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Dec 05 10:18:40 compute-0 ceph-mon[74418]: pgmap v885: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Dec 05 10:18:40 compute-0 nova_compute[257087]: 2025-12-05 10:18:40.529 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:40 compute-0 nova_compute[257087]: 2025-12-05 10:18:40.985 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:18:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:41.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:18:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:41.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 646 KiB/s rd, 0 B/s wr, 22 op/s
Dec 05 10:18:42 compute-0 ceph-mon[74418]: pgmap v886: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 646 KiB/s rd, 0 B/s wr, 22 op/s
Dec 05 10:18:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:18:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:18:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:18:43 compute-0 ovn_controller[154822]: 2025-12-05T10:18:43Z|00037|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec 05 10:18:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:43.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:43.700Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:18:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:43.700Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:18:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:43.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v887: 353 pgs: 353 active+clean; 108 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 787 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Dec 05 10:18:44 compute-0 ceph-mon[74418]: pgmap v887: 353 pgs: 353 active+clean; 108 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 787 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Dec 05 10:18:45 compute-0 nova_compute[257087]: 2025-12-05 10:18:45.531 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:18:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:45.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:18:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:18:45] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Dec 05 10:18:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:18:45] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Dec 05 10:18:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:45.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 120 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 425 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 05 10:18:45 compute-0 nova_compute[257087]: 2025-12-05 10:18:45.988 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:46 compute-0 podman[270232]: 2025-12-05 10:18:46.395400894 +0000 UTC m=+0.059480227 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 05 10:18:46 compute-0 podman[270233]: 2025-12-05 10:18:46.405483928 +0000 UTC m=+0.064580906 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Dec 05 10:18:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:47.399Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:18:47 compute-0 ceph-mon[74418]: pgmap v888: 353 pgs: 353 active+clean; 120 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 425 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 05 10:18:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:47.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:47.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v889: 353 pgs: 353 active+clean; 120 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 295 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 05 10:18:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:49 compute-0 sudo[270271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:18:49 compute-0 sudo[270271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:18:49 compute-0 sudo[270271]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:49 compute-0 sudo[270296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:18:49 compute-0 sudo[270296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:18:49 compute-0 ceph-mon[74418]: pgmap v889: 353 pgs: 353 active+clean; 120 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 295 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 05 10:18:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:49.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:49 compute-0 sudo[270296]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:49.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:18:49 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:18:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:18:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:18:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:18:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:18:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:18:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:18:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:18:49 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:18:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:18:49 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:18:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:18:49 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:18:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:18:49 compute-0 sudo[270354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:18:49 compute-0 sudo[270354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:18:49 compute-0 sudo[270354]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:49 compute-0 sudo[270385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:18:49 compute-0 sudo[270385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:18:50 compute-0 podman[270378]: 2025-12-05 10:18:50.050373788 +0000 UTC m=+0.141318802 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:18:50 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:18:50 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:18:50 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:18:50 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:18:50 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:18:50 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:18:50 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:18:50 compute-0 ceph-mon[74418]: pgmap v890: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 10:18:50 compute-0 podman[270471]: 2025-12-05 10:18:50.418640206 +0000 UTC m=+0.045411155 container create 3b477be80182d8321e817056e5a5ad943064ea0c83be7d52cf97bf0001597ae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_taussig, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:18:50 compute-0 systemd[1]: Started libpod-conmon-3b477be80182d8321e817056e5a5ad943064ea0c83be7d52cf97bf0001597ae6.scope.
Dec 05 10:18:50 compute-0 podman[270471]: 2025-12-05 10:18:50.395172139 +0000 UTC m=+0.021943108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:18:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:18:50 compute-0 podman[270471]: 2025-12-05 10:18:50.535606445 +0000 UTC m=+0.162377414 container init 3b477be80182d8321e817056e5a5ad943064ea0c83be7d52cf97bf0001597ae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_taussig, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 10:18:50 compute-0 nova_compute[257087]: 2025-12-05 10:18:50.534 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:50 compute-0 podman[270471]: 2025-12-05 10:18:50.54646461 +0000 UTC m=+0.173235559 container start 3b477be80182d8321e817056e5a5ad943064ea0c83be7d52cf97bf0001597ae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_taussig, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:18:50 compute-0 podman[270471]: 2025-12-05 10:18:50.550272454 +0000 UTC m=+0.177043403 container attach 3b477be80182d8321e817056e5a5ad943064ea0c83be7d52cf97bf0001597ae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:18:50 compute-0 stoic_taussig[270489]: 167 167
Dec 05 10:18:50 compute-0 systemd[1]: libpod-3b477be80182d8321e817056e5a5ad943064ea0c83be7d52cf97bf0001597ae6.scope: Deactivated successfully.
Dec 05 10:18:50 compute-0 podman[270471]: 2025-12-05 10:18:50.55453789 +0000 UTC m=+0.181308839 container died 3b477be80182d8321e817056e5a5ad943064ea0c83be7d52cf97bf0001597ae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:18:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6383805d8c0124580e07ee23a39f25da3899ac3ad4dd5e40acdee4456294c4a-merged.mount: Deactivated successfully.
Dec 05 10:18:50 compute-0 podman[270471]: 2025-12-05 10:18:50.600861498 +0000 UTC m=+0.227632447 container remove 3b477be80182d8321e817056e5a5ad943064ea0c83be7d52cf97bf0001597ae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_taussig, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 05 10:18:50 compute-0 systemd[1]: libpod-conmon-3b477be80182d8321e817056e5a5ad943064ea0c83be7d52cf97bf0001597ae6.scope: Deactivated successfully.
Dec 05 10:18:50 compute-0 podman[270513]: 2025-12-05 10:18:50.78965575 +0000 UTC m=+0.051504871 container create b47fc3ab70b9181399b5a6cd2ba1e5836fa4af47c1fe3f4a9396b87f346b06d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_morse, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:18:50 compute-0 systemd[1]: Started libpod-conmon-b47fc3ab70b9181399b5a6cd2ba1e5836fa4af47c1fe3f4a9396b87f346b06d8.scope.
Dec 05 10:18:50 compute-0 podman[270513]: 2025-12-05 10:18:50.769341827 +0000 UTC m=+0.031190968 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:18:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0db788cf1f8e02268b56db29daa42bc71b694ee6434bba6cb081fef35ead5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0db788cf1f8e02268b56db29daa42bc71b694ee6434bba6cb081fef35ead5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0db788cf1f8e02268b56db29daa42bc71b694ee6434bba6cb081fef35ead5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0db788cf1f8e02268b56db29daa42bc71b694ee6434bba6cb081fef35ead5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0db788cf1f8e02268b56db29daa42bc71b694ee6434bba6cb081fef35ead5c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:50 compute-0 podman[270513]: 2025-12-05 10:18:50.891446436 +0000 UTC m=+0.153295557 container init b47fc3ab70b9181399b5a6cd2ba1e5836fa4af47c1fe3f4a9396b87f346b06d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_morse, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 05 10:18:50 compute-0 podman[270513]: 2025-12-05 10:18:50.89859934 +0000 UTC m=+0.160448451 container start b47fc3ab70b9181399b5a6cd2ba1e5836fa4af47c1fe3f4a9396b87f346b06d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_morse, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 10:18:50 compute-0 podman[270513]: 2025-12-05 10:18:50.901578452 +0000 UTC m=+0.163432553 container attach b47fc3ab70b9181399b5a6cd2ba1e5836fa4af47c1fe3f4a9396b87f346b06d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:18:50 compute-0 nova_compute[257087]: 2025-12-05 10:18:50.991 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:51 compute-0 loving_morse[270529]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:18:51 compute-0 loving_morse[270529]: --> All data devices are unavailable
Dec 05 10:18:51 compute-0 systemd[1]: libpod-b47fc3ab70b9181399b5a6cd2ba1e5836fa4af47c1fe3f4a9396b87f346b06d8.scope: Deactivated successfully.
Dec 05 10:18:51 compute-0 podman[270513]: 2025-12-05 10:18:51.278057913 +0000 UTC m=+0.539907024 container died b47fc3ab70b9181399b5a6cd2ba1e5836fa4af47c1fe3f4a9396b87f346b06d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 10:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc0db788cf1f8e02268b56db29daa42bc71b694ee6434bba6cb081fef35ead5c-merged.mount: Deactivated successfully.
Dec 05 10:18:51 compute-0 podman[270513]: 2025-12-05 10:18:51.318531273 +0000 UTC m=+0.580380384 container remove b47fc3ab70b9181399b5a6cd2ba1e5836fa4af47c1fe3f4a9396b87f346b06d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_morse, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 10:18:51 compute-0 systemd[1]: libpod-conmon-b47fc3ab70b9181399b5a6cd2ba1e5836fa4af47c1fe3f4a9396b87f346b06d8.scope: Deactivated successfully.
Dec 05 10:18:51 compute-0 sudo[270385]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:51 compute-0 sudo[270554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:18:51 compute-0 sudo[270554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:18:51 compute-0 sudo[270554]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:51 compute-0 sudo[270579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:18:51 compute-0 sudo[270579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:18:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:51.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:51.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 05 10:18:52 compute-0 podman[270645]: 2025-12-05 10:18:52.035712162 +0000 UTC m=+0.116862614 container create e2d3dd8d9d279bb9bdc029550e02c1e121160ec0aa5beca8f1dabb074d72fdc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_germain, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 10:18:52 compute-0 podman[270645]: 2025-12-05 10:18:51.944318661 +0000 UTC m=+0.025469133 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:18:52 compute-0 systemd[1]: Started libpod-conmon-e2d3dd8d9d279bb9bdc029550e02c1e121160ec0aa5beca8f1dabb074d72fdc5.scope.
Dec 05 10:18:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:18:52 compute-0 podman[270645]: 2025-12-05 10:18:52.12943108 +0000 UTC m=+0.210581562 container init e2d3dd8d9d279bb9bdc029550e02c1e121160ec0aa5beca8f1dabb074d72fdc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_germain, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:18:52 compute-0 podman[270645]: 2025-12-05 10:18:52.138956378 +0000 UTC m=+0.220106830 container start e2d3dd8d9d279bb9bdc029550e02c1e121160ec0aa5beca8f1dabb074d72fdc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:18:52 compute-0 podman[270645]: 2025-12-05 10:18:52.142906486 +0000 UTC m=+0.224056958 container attach e2d3dd8d9d279bb9bdc029550e02c1e121160ec0aa5beca8f1dabb074d72fdc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 10:18:52 compute-0 wonderful_germain[270661]: 167 167
Dec 05 10:18:52 compute-0 systemd[1]: libpod-e2d3dd8d9d279bb9bdc029550e02c1e121160ec0aa5beca8f1dabb074d72fdc5.scope: Deactivated successfully.
Dec 05 10:18:52 compute-0 podman[270645]: 2025-12-05 10:18:52.147146331 +0000 UTC m=+0.228296793 container died e2d3dd8d9d279bb9bdc029550e02c1e121160ec0aa5beca8f1dabb074d72fdc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_germain, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:18:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-31b7399a23d61d0484791f26d7b6e9836cc26a7f3f4eb436ab242a4ebbfef5a2-merged.mount: Deactivated successfully.
Dec 05 10:18:52 compute-0 podman[270645]: 2025-12-05 10:18:52.178350889 +0000 UTC m=+0.259501341 container remove e2d3dd8d9d279bb9bdc029550e02c1e121160ec0aa5beca8f1dabb074d72fdc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:18:52 compute-0 systemd[1]: libpod-conmon-e2d3dd8d9d279bb9bdc029550e02c1e121160ec0aa5beca8f1dabb074d72fdc5.scope: Deactivated successfully.
Dec 05 10:18:52 compute-0 podman[270686]: 2025-12-05 10:18:52.377367988 +0000 UTC m=+0.062493889 container create 81c7f99cb700c2fdb0d171aeedf857cf3b92f1731f5a0e4fa63da3e90fbccf83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dubinsky, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:18:52 compute-0 systemd[1]: Started libpod-conmon-81c7f99cb700c2fdb0d171aeedf857cf3b92f1731f5a0e4fa63da3e90fbccf83.scope.
Dec 05 10:18:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1913e355b8d2885c28ca705264ecd7a8b199a69d87b8db5662960a5f75ec26f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1913e355b8d2885c28ca705264ecd7a8b199a69d87b8db5662960a5f75ec26f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1913e355b8d2885c28ca705264ecd7a8b199a69d87b8db5662960a5f75ec26f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1913e355b8d2885c28ca705264ecd7a8b199a69d87b8db5662960a5f75ec26f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:52 compute-0 podman[270686]: 2025-12-05 10:18:52.354720773 +0000 UTC m=+0.039846684 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:18:52 compute-0 podman[270686]: 2025-12-05 10:18:52.45507348 +0000 UTC m=+0.140199411 container init 81c7f99cb700c2fdb0d171aeedf857cf3b92f1731f5a0e4fa63da3e90fbccf83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dubinsky, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:18:52 compute-0 podman[270686]: 2025-12-05 10:18:52.462993016 +0000 UTC m=+0.148118917 container start 81c7f99cb700c2fdb0d171aeedf857cf3b92f1731f5a0e4fa63da3e90fbccf83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dubinsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:18:52 compute-0 podman[270686]: 2025-12-05 10:18:52.466420179 +0000 UTC m=+0.151546090 container attach 81c7f99cb700c2fdb0d171aeedf857cf3b92f1731f5a0e4fa63da3e90fbccf83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]: {
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:     "1": [
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:         {
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             "devices": [
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "/dev/loop3"
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             ],
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             "lv_name": "ceph_lv0",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             "lv_size": "21470642176",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             "name": "ceph_lv0",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             "tags": {
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.cluster_name": "ceph",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.crush_device_class": "",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.encrypted": "0",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.osd_id": "1",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.type": "block",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.vdo": "0",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:                 "ceph.with_tpm": "0"
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             },
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             "type": "block",
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:             "vg_name": "ceph_vg0"
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:         }
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]:     ]
Dec 05 10:18:52 compute-0 elegant_dubinsky[270703]: }
Dec 05 10:18:52 compute-0 systemd[1]: libpod-81c7f99cb700c2fdb0d171aeedf857cf3b92f1731f5a0e4fa63da3e90fbccf83.scope: Deactivated successfully.
Dec 05 10:18:52 compute-0 podman[270686]: 2025-12-05 10:18:52.777327668 +0000 UTC m=+0.462453599 container died 81c7f99cb700c2fdb0d171aeedf857cf3b92f1731f5a0e4fa63da3e90fbccf83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dubinsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 10:18:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1913e355b8d2885c28ca705264ecd7a8b199a69d87b8db5662960a5f75ec26f9-merged.mount: Deactivated successfully.
Dec 05 10:18:53 compute-0 podman[270686]: 2025-12-05 10:18:52.823531205 +0000 UTC m=+0.508657106 container remove 81c7f99cb700c2fdb0d171aeedf857cf3b92f1731f5a0e4fa63da3e90fbccf83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dubinsky, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 10:18:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:53 compute-0 ceph-mon[74418]: pgmap v891: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 05 10:18:53 compute-0 systemd[1]: libpod-conmon-81c7f99cb700c2fdb0d171aeedf857cf3b92f1731f5a0e4fa63da3e90fbccf83.scope: Deactivated successfully.
Dec 05 10:18:53 compute-0 sudo[270579]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:53 compute-0 sudo[270722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:18:53 compute-0 sudo[270722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:18:53 compute-0 sudo[270722]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:53 compute-0 sudo[270747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:18:53 compute-0 sudo[270747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:18:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:53.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:53.701Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:18:53 compute-0 podman[270815]: 2025-12-05 10:18:53.725636462 +0000 UTC m=+0.091080997 container create cb903f05bfb4bbd6628e74801145187c4ca3c5f60525aa7e2942d35dd10c8722 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_brattain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:18:53 compute-0 sudo[270829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:18:53 compute-0 sudo[270829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:18:53 compute-0 sudo[270829]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:53 compute-0 podman[270815]: 2025-12-05 10:18:53.661579921 +0000 UTC m=+0.027024496 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:18:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:53.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:53 compute-0 systemd[1]: Started libpod-conmon-cb903f05bfb4bbd6628e74801145187c4ca3c5f60525aa7e2942d35dd10c8722.scope.
Dec 05 10:18:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:18:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v892: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 05 10:18:53 compute-0 podman[270815]: 2025-12-05 10:18:53.864735472 +0000 UTC m=+0.230180027 container init cb903f05bfb4bbd6628e74801145187c4ca3c5f60525aa7e2942d35dd10c8722 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_brattain, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:18:53 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:18:53.870 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:18:53 compute-0 nova_compute[257087]: 2025-12-05 10:18:53.869 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:53 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:18:53.871 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:18:53 compute-0 podman[270815]: 2025-12-05 10:18:53.877396697 +0000 UTC m=+0.242841232 container start cb903f05bfb4bbd6628e74801145187c4ca3c5f60525aa7e2942d35dd10c8722 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_brattain, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:18:53 compute-0 wonderful_brattain[270856]: 167 167
Dec 05 10:18:53 compute-0 systemd[1]: libpod-cb903f05bfb4bbd6628e74801145187c4ca3c5f60525aa7e2942d35dd10c8722.scope: Deactivated successfully.
Dec 05 10:18:53 compute-0 podman[270815]: 2025-12-05 10:18:53.89442634 +0000 UTC m=+0.259870905 container attach cb903f05bfb4bbd6628e74801145187c4ca3c5f60525aa7e2942d35dd10c8722 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:18:53 compute-0 podman[270815]: 2025-12-05 10:18:53.895718464 +0000 UTC m=+0.261162999 container died cb903f05bfb4bbd6628e74801145187c4ca3c5f60525aa7e2942d35dd10c8722 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 10:18:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6b988ccbe242b19b180adb3e37d42111c4584ce97c97d2878c9d1c96b08fc3b-merged.mount: Deactivated successfully.
Dec 05 10:18:54 compute-0 podman[270815]: 2025-12-05 10:18:54.020954718 +0000 UTC m=+0.386399243 container remove cb903f05bfb4bbd6628e74801145187c4ca3c5f60525aa7e2942d35dd10c8722 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 10:18:54 compute-0 systemd[1]: libpod-conmon-cb903f05bfb4bbd6628e74801145187c4ca3c5f60525aa7e2942d35dd10c8722.scope: Deactivated successfully.
Dec 05 10:18:54 compute-0 ceph-mon[74418]: pgmap v892: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 05 10:18:54 compute-0 podman[270882]: 2025-12-05 10:18:54.211945849 +0000 UTC m=+0.057569165 container create ee43432bd238bcede48243c16eb22bfb331808fe749dd103f56554b0b6923d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:18:54 compute-0 systemd[1]: Started libpod-conmon-ee43432bd238bcede48243c16eb22bfb331808fe749dd103f56554b0b6923d6e.scope.
Dec 05 10:18:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:18:54 compute-0 podman[270882]: 2025-12-05 10:18:54.18990346 +0000 UTC m=+0.035526796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe6a6d8b646d9b6ca629d9cae6e301e3afa6e386aadfdff8a10bcd0d920994a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe6a6d8b646d9b6ca629d9cae6e301e3afa6e386aadfdff8a10bcd0d920994a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe6a6d8b646d9b6ca629d9cae6e301e3afa6e386aadfdff8a10bcd0d920994a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe6a6d8b646d9b6ca629d9cae6e301e3afa6e386aadfdff8a10bcd0d920994a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:54 compute-0 podman[270882]: 2025-12-05 10:18:54.302003886 +0000 UTC m=+0.147627222 container init ee43432bd238bcede48243c16eb22bfb331808fe749dd103f56554b0b6923d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:18:54 compute-0 podman[270882]: 2025-12-05 10:18:54.312840841 +0000 UTC m=+0.158464157 container start ee43432bd238bcede48243c16eb22bfb331808fe749dd103f56554b0b6923d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mirzakhani, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Dec 05 10:18:54 compute-0 podman[270882]: 2025-12-05 10:18:54.316955013 +0000 UTC m=+0.162578569 container attach ee43432bd238bcede48243c16eb22bfb331808fe749dd103f56554b0b6923d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mirzakhani, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:18:55 compute-0 lvm[270976]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:18:55 compute-0 lvm[270976]: VG ceph_vg0 finished
Dec 05 10:18:55 compute-0 youthful_mirzakhani[270899]: {}
Dec 05 10:18:55 compute-0 systemd[1]: libpod-ee43432bd238bcede48243c16eb22bfb331808fe749dd103f56554b0b6923d6e.scope: Deactivated successfully.
Dec 05 10:18:55 compute-0 systemd[1]: libpod-ee43432bd238bcede48243c16eb22bfb331808fe749dd103f56554b0b6923d6e.scope: Consumed 1.562s CPU time.
Dec 05 10:18:55 compute-0 podman[270882]: 2025-12-05 10:18:55.249645942 +0000 UTC m=+1.095269318 container died ee43432bd238bcede48243c16eb22bfb331808fe749dd103f56554b0b6923d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:18:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fe6a6d8b646d9b6ca629d9cae6e301e3afa6e386aadfdff8a10bcd0d920994a-merged.mount: Deactivated successfully.
Dec 05 10:18:55 compute-0 podman[270882]: 2025-12-05 10:18:55.306391844 +0000 UTC m=+1.152015160 container remove ee43432bd238bcede48243c16eb22bfb331808fe749dd103f56554b0b6923d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Dec 05 10:18:55 compute-0 systemd[1]: libpod-conmon-ee43432bd238bcede48243c16eb22bfb331808fe749dd103f56554b0b6923d6e.scope: Deactivated successfully.
Dec 05 10:18:55 compute-0 sudo[270747]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:18:55 compute-0 nova_compute[257087]: 2025-12-05 10:18:55.539 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:55.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:18:55] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Dec 05 10:18:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:18:55] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Dec 05 10:18:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:18:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:18:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:55.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:18:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 770 KiB/s wr, 32 op/s
Dec 05 10:18:55 compute-0 sudo[270994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:18:55 compute-0 sudo[270994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:18:55 compute-0 sudo[270994]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:55 compute-0 nova_compute[257087]: 2025-12-05 10:18:55.995 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:18:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:18:56 compute-0 ceph-mon[74418]: pgmap v893: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 770 KiB/s wr, 32 op/s
Dec 05 10:18:56 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:18:56.872 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:18:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:57.399Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:18:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:18:57.404Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:18:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:18:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:18:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:18:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:57.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:18:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:18:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:18:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:18:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:18:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:18:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:18:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:18:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:57.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:18:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2351361307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:18:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2351361307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:18:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:18:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 14 KiB/s wr, 5 op/s
Dec 05 10:18:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:18:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:18:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:18:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:18:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:18:58 compute-0 ceph-mon[74418]: pgmap v894: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 14 KiB/s wr, 5 op/s
Dec 05 10:18:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:18:59.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:18:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:18:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:18:59.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:18:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 15 KiB/s wr, 6 op/s
Dec 05 10:19:00 compute-0 nova_compute[257087]: 2025-12-05 10:19:00.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:00 compute-0 nova_compute[257087]: 2025-12-05 10:19:00.531 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:00 compute-0 nova_compute[257087]: 2025-12-05 10:19:00.531 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:00 compute-0 nova_compute[257087]: 2025-12-05 10:19:00.542 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:01 compute-0 nova_compute[257087]: 2025-12-05 10:19:01.003 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:01 compute-0 nova_compute[257087]: 2025-12-05 10:19:01.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:01.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:01 compute-0 ceph-mon[74418]: pgmap v895: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 15 KiB/s wr, 6 op/s
Dec 05 10:19:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:01.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:01 compute-0 nova_compute[257087]: 2025-12-05 10:19:01.778 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:19:01 compute-0 nova_compute[257087]: 2025-12-05 10:19:01.780 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:19:01 compute-0 nova_compute[257087]: 2025-12-05 10:19:01.780 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:19:01 compute-0 nova_compute[257087]: 2025-12-05 10:19:01.781 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:19:01 compute-0 nova_compute[257087]: 2025-12-05 10:19:01.782 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:19:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 14 KiB/s wr, 1 op/s
Dec 05 10:19:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:19:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2858776875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:19:02 compute-0 nova_compute[257087]: 2025-12-05 10:19:02.293 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:19:02 compute-0 nova_compute[257087]: 2025-12-05 10:19:02.472 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:19:02 compute-0 nova_compute[257087]: 2025-12-05 10:19:02.474 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4552MB free_disk=59.9427375793457GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:19:02 compute-0 nova_compute[257087]: 2025-12-05 10:19:02.475 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:19:02 compute-0 nova_compute[257087]: 2025-12-05 10:19:02.475 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:19:02 compute-0 nova_compute[257087]: 2025-12-05 10:19:02.552 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:19:02 compute-0 nova_compute[257087]: 2025-12-05 10:19:02.553 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:19:02 compute-0 nova_compute[257087]: 2025-12-05 10:19:02.608 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:19:02 compute-0 ceph-mon[74418]: pgmap v896: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 14 KiB/s wr, 1 op/s
Dec 05 10:19:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3364406934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:19:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2858776875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:19:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:19:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:19:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:19:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:19:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:19:03 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754921777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:19:03 compute-0 nova_compute[257087]: 2025-12-05 10:19:03.103 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:19:03 compute-0 nova_compute[257087]: 2025-12-05 10:19:03.112 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:19:03 compute-0 nova_compute[257087]: 2025-12-05 10:19:03.127 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:19:03 compute-0 nova_compute[257087]: 2025-12-05 10:19:03.129 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:19:03 compute-0 nova_compute[257087]: 2025-12-05 10:19:03.129 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:19:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:03.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:03.702Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:19:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:03.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 14 KiB/s wr, 2 op/s
Dec 05 10:19:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1486172837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:19:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1754921777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:19:04 compute-0 nova_compute[257087]: 2025-12-05 10:19:04.125 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:04 compute-0 nova_compute[257087]: 2025-12-05 10:19:04.126 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:04 compute-0 nova_compute[257087]: 2025-12-05 10:19:04.126 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:19:04 compute-0 nova_compute[257087]: 2025-12-05 10:19:04.127 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:19:04 compute-0 nova_compute[257087]: 2025-12-05 10:19:04.153 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:19:04 compute-0 nova_compute[257087]: 2025-12-05 10:19:04.154 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:04 compute-0 nova_compute[257087]: 2025-12-05 10:19:04.155 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:04 compute-0 nova_compute[257087]: 2025-12-05 10:19:04.155 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:19:04 compute-0 nova_compute[257087]: 2025-12-05 10:19:04.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:05 compute-0 ceph-mon[74418]: pgmap v897: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 14 KiB/s wr, 2 op/s
Dec 05 10:19:05 compute-0 nova_compute[257087]: 2025-12-05 10:19:05.544 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:05.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:19:05] "GET /metrics HTTP/1.1" 200 48560 "" "Prometheus/2.51.0"
Dec 05 10:19:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:19:05] "GET /metrics HTTP/1.1" 200 48560 "" "Prometheus/2.51.0"
Dec 05 10:19:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:05.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 13 KiB/s wr, 1 op/s
Dec 05 10:19:06 compute-0 nova_compute[257087]: 2025-12-05 10:19:06.006 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:06 compute-0 ceph-mon[74418]: pgmap v898: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 13 KiB/s wr, 1 op/s
Dec 05 10:19:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/410273829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:19:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:07.405Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:19:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:07.405Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:19:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:07.405Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:19:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:07.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:07.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Dec 05 10:19:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:19:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:19:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:19:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:19:08 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2482583208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:19:08 compute-0 ceph-mon[74418]: pgmap v899: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Dec 05 10:19:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:09.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:09.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 6.3 KiB/s wr, 2 op/s
Dec 05 10:19:10 compute-0 nova_compute[257087]: 2025-12-05 10:19:10.581 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:10 compute-0 ceph-mon[74418]: pgmap v900: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 6.3 KiB/s wr, 2 op/s
Dec 05 10:19:11 compute-0 nova_compute[257087]: 2025-12-05 10:19:11.009 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:11.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:11.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Dec 05 10:19:12 compute-0 ceph-mon[74418]: pgmap v901: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Dec 05 10:19:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:19:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:19:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:19:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:19:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:19:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:19:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:19:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:13.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:13.703Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:19:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:13.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:13 compute-0 sudo[271081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:19:13 compute-0 sudo[271081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:19:13 compute-0 sudo[271081]: pam_unix(sudo:session): session closed for user root
Dec 05 10:19:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 5.3 KiB/s wr, 2 op/s
Dec 05 10:19:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:14 compute-0 ceph-mon[74418]: pgmap v902: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 5.3 KiB/s wr, 2 op/s
Dec 05 10:19:15 compute-0 nova_compute[257087]: 2025-12-05 10:19:15.583 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:15.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:19:15] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec 05 10:19:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:19:15] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec 05 10:19:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:15.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Dec 05 10:19:16 compute-0 nova_compute[257087]: 2025-12-05 10:19:16.010 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:16 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2895554027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:19:17 compute-0 ceph-mon[74418]: pgmap v903: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Dec 05 10:19:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:17.406Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:19:17 compute-0 podman[271111]: 2025-12-05 10:19:17.423845166 +0000 UTC m=+0.075988326 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:19:17 compute-0 podman[271110]: 2025-12-05 10:19:17.448490446 +0000 UTC m=+0.099630269 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 05 10:19:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:19:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:17.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:19:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:17.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Dec 05 10:19:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:19:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:19:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:19:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:19:18 compute-0 ceph-mon[74418]: pgmap v904: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Dec 05 10:19:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:19.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:19.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 167 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec 05 10:19:20 compute-0 podman[271151]: 2025-12-05 10:19:20.446712301 +0000 UTC m=+0.108873650 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 10:19:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:19:20.575 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:19:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:19:20.576 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:19:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:19:20.576 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:19:20 compute-0 nova_compute[257087]: 2025-12-05 10:19:20.586 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:21 compute-0 nova_compute[257087]: 2025-12-05 10:19:21.012 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:21 compute-0 ceph-mon[74418]: pgmap v905: 353 pgs: 353 active+clean; 167 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec 05 10:19:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2235861803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:19:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2521361700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:19:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:21.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:21.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 167 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:19:23 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 05 10:19:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:19:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:19:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:19:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:19:23 compute-0 ceph-mon[74418]: pgmap v906: 353 pgs: 353 active+clean; 167 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:19:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:23.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:23.704Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:19:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:23.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 633 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 05 10:19:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:24 compute-0 ceph-mon[74418]: pgmap v907: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 633 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 05 10:19:25 compute-0 nova_compute[257087]: 2025-12-05 10:19:25.589 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:19:25] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec 05 10:19:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:19:25] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec 05 10:19:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:25.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:25.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 880 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 05 10:19:26 compute-0 nova_compute[257087]: 2025-12-05 10:19:26.016 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:26 compute-0 ceph-mon[74418]: pgmap v908: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 880 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 05 10:19:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:27.408Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:19:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:19:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:19:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:19:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:27.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:19:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:19:27
Dec 05 10:19:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:19:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:19:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:19:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:19:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:19:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:19:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:19:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:19:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.nfs', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'images', '.mgr', 'vms', 'backups', 'default.rgw.log']
Dec 05 10:19:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:19:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:27.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 880 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 05 10:19:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:19:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:19:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:19:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:19:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011088311722667187 of space, bias 1.0, pg target 0.3326493516800156 quantized to 32 (current 32)
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:19:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:19:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:29 compute-0 ceph-mon[74418]: pgmap v909: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 880 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 05 10:19:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:29.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:29.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Dec 05 10:19:30 compute-0 nova_compute[257087]: 2025-12-05 10:19:30.591 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:31 compute-0 nova_compute[257087]: 2025-12-05 10:19:31.017 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:31 compute-0 ceph-mon[74418]: pgmap v910: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Dec 05 10:19:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:31.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:31.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Dec 05 10:19:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:19:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:19:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:19:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:19:33 compute-0 ceph-mon[74418]: pgmap v911: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Dec 05 10:19:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:33.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:33.705Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:19:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:33.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 77 op/s
Dec 05 10:19:33 compute-0 sudo[271191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:19:33 compute-0 sudo[271191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:19:33 compute-0 sudo[271191]: pam_unix(sudo:session): session closed for user root
Dec 05 10:19:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:34 compute-0 ceph-mon[74418]: pgmap v912: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 77 op/s
Dec 05 10:19:35 compute-0 nova_compute[257087]: 2025-12-05 10:19:35.594 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:19:35] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Dec 05 10:19:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:19:35] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Dec 05 10:19:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:35.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:35.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 48 op/s
Dec 05 10:19:36 compute-0 nova_compute[257087]: 2025-12-05 10:19:36.019 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:36 compute-0 ceph-mon[74418]: pgmap v913: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 48 op/s
Dec 05 10:19:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:37.408Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:19:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:19:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:37.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:19:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:37.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 11 KiB/s wr, 38 op/s
Dec 05 10:19:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:19:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:19:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:19:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:19:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:38 compute-0 ceph-mon[74418]: pgmap v914: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 11 KiB/s wr, 38 op/s
Dec 05 10:19:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:39.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 98 op/s
Dec 05 10:19:40 compute-0 nova_compute[257087]: 2025-12-05 10:19:40.595 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:41 compute-0 nova_compute[257087]: 2025-12-05 10:19:41.021 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:41 compute-0 ceph-mon[74418]: pgmap v915: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 98 op/s
Dec 05 10:19:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:41.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:41.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 05 10:19:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:19:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:19:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:19:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:19:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:19:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:19:43 compute-0 ceph-mon[74418]: pgmap v916: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 05 10:19:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:19:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:43.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:43.706Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:19:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:43.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Dec 05 10:19:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:44 compute-0 ceph-mon[74418]: pgmap v917: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Dec 05 10:19:45 compute-0 nova_compute[257087]: 2025-12-05 10:19:45.597 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:19:45] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Dec 05 10:19:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:19:45] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Dec 05 10:19:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:45.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:45.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 05 10:19:46 compute-0 nova_compute[257087]: 2025-12-05 10:19:46.023 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:47.410Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:19:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:47.410Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:19:47 compute-0 ceph-mon[74418]: pgmap v918: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 05 10:19:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:47.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:47.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 247 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 05 10:19:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:19:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:19:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:19:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:19:48 compute-0 podman[271231]: 2025-12-05 10:19:48.398177878 +0000 UTC m=+0.061348809 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:19:48 compute-0 podman[271232]: 2025-12-05 10:19:48.412349523 +0000 UTC m=+0.069798478 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 10:19:48 compute-0 ceph-mon[74418]: pgmap v919: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 247 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 05 10:19:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:49.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:49.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 248 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 05 10:19:50 compute-0 nova_compute[257087]: 2025-12-05 10:19:50.599 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:51 compute-0 nova_compute[257087]: 2025-12-05 10:19:51.025 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:51 compute-0 ceph-mon[74418]: pgmap v920: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 248 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 05 10:19:51 compute-0 podman[271273]: 2025-12-05 10:19:51.460190586 +0000 UTC m=+0.119418416 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:19:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:51.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:51.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Dec 05 10:19:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:19:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:19:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:19:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:19:53 compute-0 ceph-mon[74418]: pgmap v921: 353 pgs: 353 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Dec 05 10:19:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:53.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:53.707Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:19:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:53.708Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:19:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:53.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 136 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 16 KiB/s wr, 23 op/s
Dec 05 10:19:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:54 compute-0 sudo[271302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:19:54 compute-0 sudo[271302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:19:54 compute-0 sudo[271302]: pam_unix(sudo:session): session closed for user root
Dec 05 10:19:55 compute-0 ceph-mon[74418]: pgmap v922: 353 pgs: 353 active+clean; 136 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 16 KiB/s wr, 23 op/s
Dec 05 10:19:55 compute-0 nova_compute[257087]: 2025-12-05 10:19:55.602 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:19:55] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Dec 05 10:19:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:19:55] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Dec 05 10:19:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.004000108s ======
Dec 05 10:19:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:55.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000108s
Dec 05 10:19:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:55.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Dec 05 10:19:56 compute-0 nova_compute[257087]: 2025-12-05 10:19:56.029 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2838647654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:19:56 compute-0 sudo[271329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:19:56 compute-0 sudo[271329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:19:56 compute-0 sudo[271329]: pam_unix(sudo:session): session closed for user root
Dec 05 10:19:56 compute-0 sudo[271354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:19:56 compute-0 sudo[271354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:19:56 compute-0 sudo[271354]: pam_unix(sudo:session): session closed for user root
Dec 05 10:19:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:19:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:19:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:19:56 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:19:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:19:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:19:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3765817517' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:19:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:19:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3765817517' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:19:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:19:57.411Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:19:57 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:19:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:19:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:19:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:19:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:19:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:19:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:19:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:19:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:19:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:19:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:57.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:57.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.2 KiB/s wr, 29 op/s
Dec 05 10:19:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:19:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:19:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:19:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:19:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:19:58 compute-0 ceph-mon[74418]: pgmap v923: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Dec 05 10:19:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:19:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:19:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3765817517' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:19:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3765817517' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:19:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:19:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:19:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:19:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:19:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:19:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:19:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:19:58 compute-0 sudo[271414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:19:58 compute-0 sudo[271414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:19:58 compute-0 sudo[271414]: pam_unix(sudo:session): session closed for user root
Dec 05 10:19:58 compute-0 sudo[271439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:19:58 compute-0 sudo[271439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:19:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:59 compute-0 podman[271506]: 2025-12-05 10:19:59.244673991 +0000 UTC m=+0.030021417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:19:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:19:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:19:59.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:19:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:19:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:19:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:19:59.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:19:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.2 KiB/s wr, 29 op/s
Dec 05 10:20:00 compute-0 nova_compute[257087]: 2025-12-05 10:20:00.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:00 compute-0 nova_compute[257087]: 2025-12-05 10:20:00.604 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:01 compute-0 nova_compute[257087]: 2025-12-05 10:20:01.033 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:01 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 10:20:01 compute-0 podman[271506]: 2025-12-05 10:20:01.23822481 +0000 UTC m=+2.023572206 container create fea417d8cecacbf934b27561b4921bf848da6f2f5d21a2a07793a154f72dd189 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_galileo, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 05 10:20:01 compute-0 nova_compute[257087]: 2025-12-05 10:20:01.246 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:01 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:20:01.245 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:20:01 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:20:01.247 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:20:01 compute-0 systemd[1]: Started libpod-conmon-fea417d8cecacbf934b27561b4921bf848da6f2f5d21a2a07793a154f72dd189.scope.
Dec 05 10:20:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:20:01 compute-0 podman[271506]: 2025-12-05 10:20:01.352056514 +0000 UTC m=+2.137403940 container init fea417d8cecacbf934b27561b4921bf848da6f2f5d21a2a07793a154f72dd189 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 10:20:01 compute-0 podman[271506]: 2025-12-05 10:20:01.361953524 +0000 UTC m=+2.147300920 container start fea417d8cecacbf934b27561b4921bf848da6f2f5d21a2a07793a154f72dd189 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 05 10:20:01 compute-0 podman[271506]: 2025-12-05 10:20:01.366657271 +0000 UTC m=+2.152004687 container attach fea417d8cecacbf934b27561b4921bf848da6f2f5d21a2a07793a154f72dd189 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_galileo, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:20:01 compute-0 blissful_galileo[271524]: 167 167
Dec 05 10:20:01 compute-0 systemd[1]: libpod-fea417d8cecacbf934b27561b4921bf848da6f2f5d21a2a07793a154f72dd189.scope: Deactivated successfully.
Dec 05 10:20:01 compute-0 podman[271506]: 2025-12-05 10:20:01.369875858 +0000 UTC m=+2.155223274 container died fea417d8cecacbf934b27561b4921bf848da6f2f5d21a2a07793a154f72dd189 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_galileo, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:20:01 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:20:01 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:20:01 compute-0 ceph-mon[74418]: pgmap v924: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.2 KiB/s wr, 29 op/s
Dec 05 10:20:01 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:20:01 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:20:01 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:20:01 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:20:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7754413af9b809fcfd26d0142c7526d3ca1bf98913030032e7c52558c8eec8e8-merged.mount: Deactivated successfully.
Dec 05 10:20:01 compute-0 podman[271506]: 2025-12-05 10:20:01.428499372 +0000 UTC m=+2.213846758 container remove fea417d8cecacbf934b27561b4921bf848da6f2f5d21a2a07793a154f72dd189 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_galileo, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 10:20:01 compute-0 systemd[1]: libpod-conmon-fea417d8cecacbf934b27561b4921bf848da6f2f5d21a2a07793a154f72dd189.scope: Deactivated successfully.
Dec 05 10:20:01 compute-0 nova_compute[257087]: 2025-12-05 10:20:01.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:01 compute-0 podman[271546]: 2025-12-05 10:20:01.617558721 +0000 UTC m=+0.072367518 container create c0545e0232bddbe7d2dfb76154e2509e18065259e07beae0d60c70edab9349a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hawking, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:20:01 compute-0 systemd[1]: Started libpod-conmon-c0545e0232bddbe7d2dfb76154e2509e18065259e07beae0d60c70edab9349a5.scope.
Dec 05 10:20:01 compute-0 podman[271546]: 2025-12-05 10:20:01.572578468 +0000 UTC m=+0.027387285 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:20:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07898546c8a73a71f6a6da3d2dcb0387e5510d682723484ff305283c939d7257/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07898546c8a73a71f6a6da3d2dcb0387e5510d682723484ff305283c939d7257/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07898546c8a73a71f6a6da3d2dcb0387e5510d682723484ff305283c939d7257/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07898546c8a73a71f6a6da3d2dcb0387e5510d682723484ff305283c939d7257/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07898546c8a73a71f6a6da3d2dcb0387e5510d682723484ff305283c939d7257/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:01 compute-0 podman[271546]: 2025-12-05 10:20:01.698901091 +0000 UTC m=+0.153709908 container init c0545e0232bddbe7d2dfb76154e2509e18065259e07beae0d60c70edab9349a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hawking, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:20:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:01.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:01 compute-0 podman[271546]: 2025-12-05 10:20:01.70919094 +0000 UTC m=+0.163999727 container start c0545e0232bddbe7d2dfb76154e2509e18065259e07beae0d60c70edab9349a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hawking, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 10:20:01 compute-0 podman[271546]: 2025-12-05 10:20:01.712948853 +0000 UTC m=+0.167757660 container attach c0545e0232bddbe7d2dfb76154e2509e18065259e07beae0d60c70edab9349a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:20:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:01.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 05 10:20:02 compute-0 beautiful_hawking[271562]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:20:02 compute-0 beautiful_hawking[271562]: --> All data devices are unavailable
Dec 05 10:20:02 compute-0 systemd[1]: libpod-c0545e0232bddbe7d2dfb76154e2509e18065259e07beae0d60c70edab9349a5.scope: Deactivated successfully.
Dec 05 10:20:02 compute-0 podman[271546]: 2025-12-05 10:20:02.075452414 +0000 UTC m=+0.530261211 container died c0545e0232bddbe7d2dfb76154e2509e18065259e07beae0d60c70edab9349a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hawking, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 10:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-07898546c8a73a71f6a6da3d2dcb0387e5510d682723484ff305283c939d7257-merged.mount: Deactivated successfully.
Dec 05 10:20:02 compute-0 podman[271546]: 2025-12-05 10:20:02.114793214 +0000 UTC m=+0.569602011 container remove c0545e0232bddbe7d2dfb76154e2509e18065259e07beae0d60c70edab9349a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hawking, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:20:02 compute-0 systemd[1]: libpod-conmon-c0545e0232bddbe7d2dfb76154e2509e18065259e07beae0d60c70edab9349a5.scope: Deactivated successfully.
Dec 05 10:20:02 compute-0 sudo[271439]: pam_unix(sudo:session): session closed for user root
Dec 05 10:20:02 compute-0 sudo[271589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:20:02 compute-0 sudo[271589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:20:02 compute-0 sudo[271589]: pam_unix(sudo:session): session closed for user root
Dec 05 10:20:02 compute-0 sudo[271615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:20:02 compute-0 sudo[271615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:20:02 compute-0 ceph-mon[74418]: pgmap v925: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.2 KiB/s wr, 29 op/s
Dec 05 10:20:02 compute-0 ceph-mon[74418]: overall HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 10:20:02 compute-0 ceph-mon[74418]: pgmap v926: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 05 10:20:02 compute-0 nova_compute[257087]: 2025-12-05 10:20:02.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:02 compute-0 nova_compute[257087]: 2025-12-05 10:20:02.527 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:02 compute-0 nova_compute[257087]: 2025-12-05 10:20:02.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:02 compute-0 podman[271682]: 2025-12-05 10:20:02.669051247 +0000 UTC m=+0.054452800 container create ced467ed2f5f33172190e2d63f2d6416b091927c5d56f566542a1a7d0639797e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_torvalds, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:20:02 compute-0 systemd[1]: Started libpod-conmon-ced467ed2f5f33172190e2d63f2d6416b091927c5d56f566542a1a7d0639797e.scope.
Dec 05 10:20:02 compute-0 podman[271682]: 2025-12-05 10:20:02.646490914 +0000 UTC m=+0.031892457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:20:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:20:02 compute-0 podman[271682]: 2025-12-05 10:20:02.775790628 +0000 UTC m=+0.161192211 container init ced467ed2f5f33172190e2d63f2d6416b091927c5d56f566542a1a7d0639797e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 10:20:02 compute-0 podman[271682]: 2025-12-05 10:20:02.78688625 +0000 UTC m=+0.172287773 container start ced467ed2f5f33172190e2d63f2d6416b091927c5d56f566542a1a7d0639797e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Dec 05 10:20:02 compute-0 podman[271682]: 2025-12-05 10:20:02.790361504 +0000 UTC m=+0.175763017 container attach ced467ed2f5f33172190e2d63f2d6416b091927c5d56f566542a1a7d0639797e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_torvalds, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 10:20:02 compute-0 mystifying_torvalds[271698]: 167 167
Dec 05 10:20:02 compute-0 systemd[1]: libpod-ced467ed2f5f33172190e2d63f2d6416b091927c5d56f566542a1a7d0639797e.scope: Deactivated successfully.
Dec 05 10:20:02 compute-0 podman[271682]: 2025-12-05 10:20:02.793736896 +0000 UTC m=+0.179138409 container died ced467ed2f5f33172190e2d63f2d6416b091927c5d56f566542a1a7d0639797e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 05 10:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-63aede739258d1b47855654045358c3d12b78cac0ef7331edc9657b08f96bd1e-merged.mount: Deactivated successfully.
Dec 05 10:20:02 compute-0 podman[271682]: 2025-12-05 10:20:02.836100148 +0000 UTC m=+0.221501661 container remove ced467ed2f5f33172190e2d63f2d6416b091927c5d56f566542a1a7d0639797e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 10:20:02 compute-0 systemd[1]: libpod-conmon-ced467ed2f5f33172190e2d63f2d6416b091927c5d56f566542a1a7d0639797e.scope: Deactivated successfully.
Dec 05 10:20:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:20:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:20:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:20:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:20:03 compute-0 podman[271722]: 2025-12-05 10:20:03.062569503 +0000 UTC m=+0.096410202 container create dd9cee8dd1cd2f4dc26500c0dfef7e247bf18c3c6ab26e7347a5d80e3ea7295a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Dec 05 10:20:03 compute-0 podman[271722]: 2025-12-05 10:20:02.989830056 +0000 UTC m=+0.023670735 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:20:03 compute-0 systemd[1]: Started libpod-conmon-dd9cee8dd1cd2f4dc26500c0dfef7e247bf18c3c6ab26e7347a5d80e3ea7295a.scope.
Dec 05 10:20:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eecccf45814e32222926f232bdc07ad56829192747fefd57b2b3232966cc5c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eecccf45814e32222926f232bdc07ad56829192747fefd57b2b3232966cc5c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eecccf45814e32222926f232bdc07ad56829192747fefd57b2b3232966cc5c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eecccf45814e32222926f232bdc07ad56829192747fefd57b2b3232966cc5c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:03 compute-0 podman[271722]: 2025-12-05 10:20:03.162060276 +0000 UTC m=+0.195900935 container init dd9cee8dd1cd2f4dc26500c0dfef7e247bf18c3c6ab26e7347a5d80e3ea7295a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_darwin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:20:03 compute-0 podman[271722]: 2025-12-05 10:20:03.168879922 +0000 UTC m=+0.202720581 container start dd9cee8dd1cd2f4dc26500c0dfef7e247bf18c3c6ab26e7347a5d80e3ea7295a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_darwin, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:20:03 compute-0 podman[271722]: 2025-12-05 10:20:03.173032224 +0000 UTC m=+0.206872903 container attach dd9cee8dd1cd2f4dc26500c0dfef7e247bf18c3c6ab26e7347a5d80e3ea7295a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_darwin, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 10:20:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3331770505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:20:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/398674922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:20:03 compute-0 zen_darwin[271738]: {
Dec 05 10:20:03 compute-0 zen_darwin[271738]:     "1": [
Dec 05 10:20:03 compute-0 zen_darwin[271738]:         {
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             "devices": [
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "/dev/loop3"
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             ],
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             "lv_name": "ceph_lv0",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             "lv_size": "21470642176",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             "name": "ceph_lv0",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             "tags": {
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.cluster_name": "ceph",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.crush_device_class": "",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.encrypted": "0",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.osd_id": "1",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.type": "block",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.vdo": "0",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:                 "ceph.with_tpm": "0"
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             },
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             "type": "block",
Dec 05 10:20:03 compute-0 zen_darwin[271738]:             "vg_name": "ceph_vg0"
Dec 05 10:20:03 compute-0 zen_darwin[271738]:         }
Dec 05 10:20:03 compute-0 zen_darwin[271738]:     ]
Dec 05 10:20:03 compute-0 zen_darwin[271738]: }
Dec 05 10:20:03 compute-0 systemd[1]: libpod-dd9cee8dd1cd2f4dc26500c0dfef7e247bf18c3c6ab26e7347a5d80e3ea7295a.scope: Deactivated successfully.
Dec 05 10:20:03 compute-0 nova_compute[257087]: 2025-12-05 10:20:03.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:03 compute-0 podman[271747]: 2025-12-05 10:20:03.560974227 +0000 UTC m=+0.043060101 container died dd9cee8dd1cd2f4dc26500c0dfef7e247bf18c3c6ab26e7347a5d80e3ea7295a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eecccf45814e32222926f232bdc07ad56829192747fefd57b2b3232966cc5c9-merged.mount: Deactivated successfully.
Dec 05 10:20:03 compute-0 nova_compute[257087]: 2025-12-05 10:20:03.590 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:20:03 compute-0 nova_compute[257087]: 2025-12-05 10:20:03.591 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:20:03 compute-0 nova_compute[257087]: 2025-12-05 10:20:03.591 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:20:03 compute-0 nova_compute[257087]: 2025-12-05 10:20:03.591 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:20:03 compute-0 nova_compute[257087]: 2025-12-05 10:20:03.592 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:20:03 compute-0 podman[271747]: 2025-12-05 10:20:03.608647914 +0000 UTC m=+0.090733738 container remove dd9cee8dd1cd2f4dc26500c0dfef7e247bf18c3c6ab26e7347a5d80e3ea7295a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_darwin, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:20:03 compute-0 systemd[1]: libpod-conmon-dd9cee8dd1cd2f4dc26500c0dfef7e247bf18c3c6ab26e7347a5d80e3ea7295a.scope: Deactivated successfully.
Dec 05 10:20:03 compute-0 sudo[271615]: pam_unix(sudo:session): session closed for user root
Dec 05 10:20:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:03.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:03.710Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:20:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:03.711Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:20:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:03.712Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:20:03 compute-0 sudo[271763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:20:03 compute-0 sudo[271763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:20:03 compute-0 sudo[271763]: pam_unix(sudo:session): session closed for user root
Dec 05 10:20:03 compute-0 sudo[271807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:20:03 compute-0 sudo[271807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:20:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:03.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Dec 05 10:20:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:20:04 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255262797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.083 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.299 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.301 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4557MB free_disk=59.942588806152344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.301 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.301 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:20:04 compute-0 podman[271878]: 2025-12-05 10:20:04.332375923 +0000 UTC m=+0.052611931 container create 9b37f484c00a6a8e0f224a1da01c1f8cfb5b12fc8aaaca24598b960905bc96da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_buck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.359 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.360 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:20:04 compute-0 systemd[1]: Started libpod-conmon-9b37f484c00a6a8e0f224a1da01c1f8cfb5b12fc8aaaca24598b960905bc96da.scope.
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.376 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:20:04 compute-0 podman[271878]: 2025-12-05 10:20:04.311520456 +0000 UTC m=+0.031756494 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:20:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:20:04 compute-0 podman[271878]: 2025-12-05 10:20:04.431290821 +0000 UTC m=+0.151526849 container init 9b37f484c00a6a8e0f224a1da01c1f8cfb5b12fc8aaaca24598b960905bc96da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:20:04 compute-0 podman[271878]: 2025-12-05 10:20:04.438927959 +0000 UTC m=+0.159163967 container start 9b37f484c00a6a8e0f224a1da01c1f8cfb5b12fc8aaaca24598b960905bc96da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_buck, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 05 10:20:04 compute-0 podman[271878]: 2025-12-05 10:20:04.442437584 +0000 UTC m=+0.162673632 container attach 9b37f484c00a6a8e0f224a1da01c1f8cfb5b12fc8aaaca24598b960905bc96da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_buck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 10:20:04 compute-0 ceph-mon[74418]: pgmap v927: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Dec 05 10:20:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1255262797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:20:04 compute-0 strange_buck[271895]: 167 167
Dec 05 10:20:04 compute-0 podman[271878]: 2025-12-05 10:20:04.447759869 +0000 UTC m=+0.167995877 container died 9b37f484c00a6a8e0f224a1da01c1f8cfb5b12fc8aaaca24598b960905bc96da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:20:04 compute-0 systemd[1]: libpod-9b37f484c00a6a8e0f224a1da01c1f8cfb5b12fc8aaaca24598b960905bc96da.scope: Deactivated successfully.
Dec 05 10:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2b2a250353d571ad96fbddeb21381d03a38098a575e7b98973154ac01030b39-merged.mount: Deactivated successfully.
Dec 05 10:20:04 compute-0 podman[271878]: 2025-12-05 10:20:04.493968995 +0000 UTC m=+0.214205003 container remove 9b37f484c00a6a8e0f224a1da01c1f8cfb5b12fc8aaaca24598b960905bc96da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:20:04 compute-0 systemd[1]: libpod-conmon-9b37f484c00a6a8e0f224a1da01c1f8cfb5b12fc8aaaca24598b960905bc96da.scope: Deactivated successfully.
Dec 05 10:20:04 compute-0 podman[271940]: 2025-12-05 10:20:04.670433841 +0000 UTC m=+0.050470473 container create 4aed743df8ab835b7bbb8e9f58ea46718d14db3009936d1a04b646a423232e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_faraday, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 05 10:20:04 compute-0 systemd[1]: Started libpod-conmon-4aed743df8ab835b7bbb8e9f58ea46718d14db3009936d1a04b646a423232e97.scope.
Dec 05 10:20:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aeb18bd048ca4c014b49c5cd0d4112585622f88c63f832f73323569bfb5676a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aeb18bd048ca4c014b49c5cd0d4112585622f88c63f832f73323569bfb5676a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aeb18bd048ca4c014b49c5cd0d4112585622f88c63f832f73323569bfb5676a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aeb18bd048ca4c014b49c5cd0d4112585622f88c63f832f73323569bfb5676a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:20:04 compute-0 podman[271940]: 2025-12-05 10:20:04.653745337 +0000 UTC m=+0.033781989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:20:04 compute-0 podman[271940]: 2025-12-05 10:20:04.751326988 +0000 UTC m=+0.131363620 container init 4aed743df8ab835b7bbb8e9f58ea46718d14db3009936d1a04b646a423232e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:20:04 compute-0 podman[271940]: 2025-12-05 10:20:04.763568352 +0000 UTC m=+0.143604984 container start 4aed743df8ab835b7bbb8e9f58ea46718d14db3009936d1a04b646a423232e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_faraday, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:20:04 compute-0 podman[271940]: 2025-12-05 10:20:04.767157959 +0000 UTC m=+0.147194681 container attach 4aed743df8ab835b7bbb8e9f58ea46718d14db3009936d1a04b646a423232e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 10:20:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:20:04 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639306543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.868 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.876 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.898 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.901 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:20:04 compute-0 nova_compute[257087]: 2025-12-05 10:20:04.902 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:20:05 compute-0 lvm[272033]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:20:05 compute-0 lvm[272033]: VG ceph_vg0 finished
Dec 05 10:20:05 compute-0 happy_faraday[271956]: {}
Dec 05 10:20:05 compute-0 systemd[1]: libpod-4aed743df8ab835b7bbb8e9f58ea46718d14db3009936d1a04b646a423232e97.scope: Deactivated successfully.
Dec 05 10:20:05 compute-0 podman[271940]: 2025-12-05 10:20:05.536007315 +0000 UTC m=+0.916043967 container died 4aed743df8ab835b7bbb8e9f58ea46718d14db3009936d1a04b646a423232e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:20:05 compute-0 systemd[1]: libpod-4aed743df8ab835b7bbb8e9f58ea46718d14db3009936d1a04b646a423232e97.scope: Consumed 1.230s CPU time.
Dec 05 10:20:05 compute-0 nova_compute[257087]: 2025-12-05 10:20:05.607 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:20:05] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec 05 10:20:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:20:05] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec 05 10:20:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:05.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:20:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:05.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:20:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 2.8 KiB/s wr, 7 op/s
Dec 05 10:20:05 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/160894111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:20:05 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3639306543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:20:05 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3217143384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:20:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aeb18bd048ca4c014b49c5cd0d4112585622f88c63f832f73323569bfb5676a-merged.mount: Deactivated successfully.
Dec 05 10:20:05 compute-0 podman[271940]: 2025-12-05 10:20:05.990954499 +0000 UTC m=+1.370991131 container remove 4aed743df8ab835b7bbb8e9f58ea46718d14db3009936d1a04b646a423232e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 10:20:06 compute-0 systemd[1]: libpod-conmon-4aed743df8ab835b7bbb8e9f58ea46718d14db3009936d1a04b646a423232e97.scope: Deactivated successfully.
Dec 05 10:20:06 compute-0 nova_compute[257087]: 2025-12-05 10:20:06.034 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:06 compute-0 sudo[271807]: pam_unix(sudo:session): session closed for user root
Dec 05 10:20:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:20:06 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:20:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:20:06 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:20:06 compute-0 sudo[272048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:20:06 compute-0 sudo[272048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:20:06 compute-0 sudo[272048]: pam_unix(sudo:session): session closed for user root
Dec 05 10:20:06 compute-0 nova_compute[257087]: 2025-12-05 10:20:06.899 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:07.412Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:20:07 compute-0 nova_compute[257087]: 2025-12-05 10:20:07.492 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:07 compute-0 nova_compute[257087]: 2025-12-05 10:20:07.492 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:20:07 compute-0 nova_compute[257087]: 2025-12-05 10:20:07.493 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:20:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:07.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:07 compute-0 ceph-mon[74418]: pgmap v928: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 2.8 KiB/s wr, 7 op/s
Dec 05 10:20:07 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:20:07 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:20:07 compute-0 nova_compute[257087]: 2025-12-05 10:20:07.812 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:20:07 compute-0 nova_compute[257087]: 2025-12-05 10:20:07.812 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:07 compute-0 nova_compute[257087]: 2025-12-05 10:20:07.813 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:07 compute-0 nova_compute[257087]: 2025-12-05 10:20:07.813 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:20:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:07.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Dec 05 10:20:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:20:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:20:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:20:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:20:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:08 compute-0 ceph-mon[74418]: pgmap v929: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Dec 05 10:20:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:09.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:09.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Dec 05 10:20:10 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:20:10.249 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:20:10 compute-0 nova_compute[257087]: 2025-12-05 10:20:10.611 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:11 compute-0 ceph-mon[74418]: pgmap v930: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Dec 05 10:20:11 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2316898219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:20:11 compute-0 nova_compute[257087]: 2025-12-05 10:20:11.036 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:11.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:11.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 05 10:20:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:20:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:20:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:20:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:20:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:20:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:20:13 compute-0 ceph-mon[74418]: pgmap v931: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 05 10:20:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:20:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:13.713Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:20:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:13.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:13.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Dec 05 10:20:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:14 compute-0 sudo[272080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:20:14 compute-0 sudo[272080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:20:14 compute-0 sudo[272080]: pam_unix(sudo:session): session closed for user root
Dec 05 10:20:14 compute-0 ceph-mon[74418]: pgmap v932: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Dec 05 10:20:15 compute-0 nova_compute[257087]: 2025-12-05 10:20:15.613 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:20:15] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:20:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:20:15] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:20:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:15.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:20:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:15.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:20:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 05 10:20:16 compute-0 nova_compute[257087]: 2025-12-05 10:20:16.047 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:16 compute-0 ceph-mon[74418]: pgmap v933: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 05 10:20:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:17.415Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:20:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:17.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:17.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 05 10:20:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:20:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:20:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:20:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:20:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:18 compute-0 ceph-mon[74418]: pgmap v934: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 05 10:20:19 compute-0 podman[272112]: 2025-12-05 10:20:19.425393257 +0000 UTC m=+0.079494192 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Dec 05 10:20:19 compute-0 podman[272111]: 2025-12-05 10:20:19.446252644 +0000 UTC m=+0.100189575 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 05 10:20:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:19.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:19.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 05 10:20:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:20:20.577 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:20:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:20:20.578 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:20:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:20:20.578 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:20:20 compute-0 nova_compute[257087]: 2025-12-05 10:20:20.666 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:21 compute-0 nova_compute[257087]: 2025-12-05 10:20:21.051 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:21 compute-0 ceph-mon[74418]: pgmap v935: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 05 10:20:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:21.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:20:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:21.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:22 compute-0 podman[272152]: 2025-12-05 10:20:22.439293727 +0000 UTC m=+0.096348519 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 10:20:22 compute-0 ceph-mon[74418]: pgmap v936: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:20:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:20:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:20:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:20:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:20:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:23.714Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:20:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:23.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:20:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:23.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:24 compute-0 ceph-mon[74418]: pgmap v937: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:20:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:20:25] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:20:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:20:25] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:20:25 compute-0 nova_compute[257087]: 2025-12-05 10:20:25.669 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:25.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:20:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:25.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:26 compute-0 nova_compute[257087]: 2025-12-05 10:20:26.090 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:27 compute-0 ceph-mon[74418]: pgmap v938: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:20:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:27.416Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:20:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:27.417Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:20:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:20:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:20:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:20:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:20:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:20:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:20:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:20:27
Dec 05 10:20:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:20:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:20:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.rgw.root', '.nfs', 'default.rgw.control', '.mgr']
Dec 05 10:20:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:20:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:27.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:20:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:20:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:20:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:27.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:20:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:20:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:20:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
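Each pg target above is consistent with pg_target = usage_ratio * bias * budget, where the budget comes out to exactly 300 for this cluster; a plausible reading is the default mon_target_pg_per_osd of 100 times 3 OSDs, though the OSD count is an assumption here. The tiny results are then quantized up to the pool's floor, which is why almost every pool reads "quantized to 32". A worked check against three of the logged values:

    # pg_target == ratio * bias * budget; budget = 300 assumed
    # (mon_target_pg_per_osd=100 * 3 OSDs).
    BUDGET = 300
    pools = [
        # (pool, usage ratio, bias, logged pg target)
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("images",             0.000665858301588852,  1.0, 0.19975749047665559),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]
    for pool, ratio, bias, logged in pools:
        computed = ratio * bias * BUDGET
        assert abs(computed - logged) < 1e-12, pool
        print(f"{pool}: {computed:.12g} matches logged {logged:.12g}")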
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:20:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:20:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:20:28 compute-0 ceph-mon[74418]: pgmap v939: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:20:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.000768) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930029000937, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1741, "num_deletes": 506, "total_data_size": 2822822, "memory_usage": 2861296, "flush_reason": "Manual Compaction"}
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930029022705, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2147179, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27019, "largest_seqno": 28759, "table_properties": {"data_size": 2140549, "index_size": 3188, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18297, "raw_average_key_size": 19, "raw_value_size": 2124716, "raw_average_value_size": 2246, "num_data_blocks": 138, "num_entries": 946, "num_filter_entries": 946, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929884, "oldest_key_time": 1764929884, "file_creation_time": 1764930029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 22014 microseconds, and 9249 cpu microseconds.
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.022794) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2147179 bytes OK
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.022834) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.027589) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.027620) EVENT_LOG_v1 {"time_micros": 1764930029027610, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.027648) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2814401, prev total WAL file size 2814401, number of live WAL files 2.
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.029006) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2096KB)], [59(14MB)]
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930029029159, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17298534, "oldest_snapshot_seqno": -1}
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5930 keys, 13627722 bytes, temperature: kUnknown
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930029189997, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 13627722, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13588820, "index_size": 22984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 152779, "raw_average_key_size": 25, "raw_value_size": 13482322, "raw_average_value_size": 2273, "num_data_blocks": 924, "num_entries": 5930, "num_filter_entries": 5930, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764930029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.191771) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 13627722 bytes
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.193677) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.5 rd, 83.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 14.4 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(14.4) write-amplify(6.3) OK, records in: 6920, records dropped: 990 output_compression: NoCompression
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.193697) EVENT_LOG_v1 {"time_micros": 1764930029193688, "job": 32, "event": "compaction_finished", "compaction_time_micros": 162386, "compaction_time_cpu_micros": 46089, "output_level": 6, "num_output_files": 1, "total_output_size": 13627722, "num_input_records": 6920, "num_output_records": 5930, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930029194809, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930029197839, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.028815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.198079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.198094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.198098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.198101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:20:29 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:20:29.198104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
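The JOB 32 compaction summary can be reproduced from the EVENT_LOG figures above: 2147179 B read from L0 plus 15151355 B from L6 (input_data_size 17298534), 13627722 B written, over 162386 compaction microseconds. The amplification definitions below are inferred from the summary line itself:

    # Numbers copied from the JOB 32 EVENT_LOG entries above.
    l0_in    = 2_147_179       # table #61, the L0 input
    total_in = 17_298_534      # "input_data_size" (L0 + L6 inputs)
    out      = 13_627_722      # table #62, the L6 output
    usecs    = 162_386         # "compaction_time_micros"

    read_mb_s  = total_in / usecs          # bytes per microsecond == decimal MB/s
    write_mb_s = out / usecs
    rw_amp     = (total_in + out) / l0_in  # read-write-amplify
    w_amp      = out / l0_in               # write-amplify

    print(f"{read_mb_s:.1f} rd, {write_mb_s:.1f} wr, "
          f"rw-amp {rw_amp:.1f}, w-amp {w_amp:.1f}")
    # -> 106.5 rd, 83.9 wr, rw-amp 14.4, w-amp 6.3  (matches the summary)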
Dec 05 10:20:29 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Check health
Dec 05 10:20:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:29.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:20:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:29.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:30 compute-0 nova_compute[257087]: 2025-12-05 10:20:30.671 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:31 compute-0 nova_compute[257087]: 2025-12-05 10:20:31.130 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:31 compute-0 ceph-mon[74418]: pgmap v940: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:20:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:31.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:20:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:31.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:32 compute-0 ceph-mon[74418]: pgmap v941: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:20:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:20:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:20:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:20:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:20:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:33.715Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:20:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:33.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:20:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:33.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:34 compute-0 sudo[272188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:20:34 compute-0 sudo[272188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:20:34 compute-0 sudo[272188]: pam_unix(sudo:session): session closed for user root
Dec 05 10:20:35 compute-0 ceph-mon[74418]: pgmap v942: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:20:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:20:35] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:20:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:20:35] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:20:35 compute-0 nova_compute[257087]: 2025-12-05 10:20:35.673 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:35.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:20:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:35.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:36 compute-0 nova_compute[257087]: 2025-12-05 10:20:36.131 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:37 compute-0 ceph-mon[74418]: pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:20:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1149135402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:20:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:37.418Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:20:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:37.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:20:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:37.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:20:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:20:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:20:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:20:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:39.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:20:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:39.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:40 compute-0 nova_compute[257087]: 2025-12-05 10:20:40.766 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:41 compute-0 nova_compute[257087]: 2025-12-05 10:20:41.134 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:41.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:20:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:41.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:41 compute-0 ceph-mon[74418]: pgmap v944: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:20:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:20:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:20:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:20:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:20:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:20:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:20:43 compute-0 ceph-mon[74418]: pgmap v945: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:20:43 compute-0 ceph-mon[74418]: pgmap v946: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:20:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:20:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:43.717Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:20:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:43.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:20:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:43.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3828842214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:20:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2465316392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:20:44 compute-0 ceph-mon[74418]: pgmap v947: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:20:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:20:45] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Dec 05 10:20:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:20:45] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Dec 05 10:20:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:45.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:45 compute-0 nova_compute[257087]: 2025-12-05 10:20:45.768 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:20:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:45.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:46 compute-0 nova_compute[257087]: 2025-12-05 10:20:46.176 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:47.418Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:20:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:47.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:20:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:47.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:20:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:20:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:20:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:20:48 compute-0 ceph-mon[74418]: pgmap v948: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:20:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:49.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 05 10:20:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:49.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:50 compute-0 ceph-mon[74418]: pgmap v949: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:20:50 compute-0 podman[272230]: 2025-12-05 10:20:50.438710808 +0000 UTC m=+0.077290001 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 10:20:50 compute-0 podman[272231]: 2025-12-05 10:20:50.48845955 +0000 UTC m=+0.126758765 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
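The podman health_status events embed each container's configuration as a Python-repr dict after config_data=. A small sketch for recovering it from a captured journal line by brace balancing, which works for the lines above because no quoted value contains a brace:

    import ast

    def extract_config_data(line):
        # Slice out the balanced {...} that follows "config_data=".
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unbalanced config_data block")

    # e.g. cfg = extract_config_data(journal_line)
    #      cfg["healthcheck"]["test"]  -> "/openstack/healthcheck"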
Dec 05 10:20:50 compute-0 nova_compute[257087]: 2025-12-05 10:20:50.769 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:51 compute-0 nova_compute[257087]: 2025-12-05 10:20:51.223 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:51 compute-0 ceph-mon[74418]: pgmap v950: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 05 10:20:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:51.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 12 KiB/s wr, 5 op/s
Dec 05 10:20:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:51.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:52 compute-0 ceph-mon[74418]: pgmap v951: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 12 KiB/s wr, 5 op/s
Dec 05 10:20:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:20:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:20:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:20:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:20:53 compute-0 podman[272272]: 2025-12-05 10:20:53.43275623 +0000 UTC m=+0.101103959 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:20:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:53.718Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:20:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:53.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Dec 05 10:20:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:53.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:54 compute-0 sudo[272299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:20:54 compute-0 sudo[272299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:20:54 compute-0 sudo[272299]: pam_unix(sudo:session): session closed for user root
Dec 05 10:20:55 compute-0 ceph-mon[74418]: pgmap v952: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Dec 05 10:20:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:20:55] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Dec 05 10:20:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:20:55] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Dec 05 10:20:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:55.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:55 compute-0 nova_compute[257087]: 2025-12-05 10:20:55.772 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:20:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:55.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:56 compute-0 nova_compute[257087]: 2025-12-05 10:20:56.226 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:20:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1014529961' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:20:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:20:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1014529961' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:20:57 compute-0 ceph-mon[74418]: pgmap v953: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:20:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:20:57.422Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:20:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:20:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:20:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:20:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:20:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:20:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:20:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:20:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:20:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:57.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:20:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:57.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:20:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:20:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:20:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:20:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:20:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:20:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1014529961' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:20:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1014529961' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:20:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:20:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=cleanup t=2025-12-05T10:20:59.220510688Z level=info msg="Completed cleanup jobs" duration=34.911479ms
Dec 05 10:20:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=plugins.update.checker t=2025-12-05T10:20:59.324656278Z level=info msg="Update check succeeded" duration=57.811071ms
Dec 05 10:20:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=grafana.update.checker t=2025-12-05T10:20:59.325892962Z level=info msg="Update check succeeded" duration=48.970351ms
Dec 05 10:20:59 compute-0 ceph-mon[74418]: pgmap v954: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:20:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:20:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:20:59.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:20:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Dec 05 10:20:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:20:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:20:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:20:59.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:00 compute-0 ceph-mon[74418]: pgmap v955: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Dec 05 10:21:00 compute-0 nova_compute[257087]: 2025-12-05 10:21:00.775 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:01 compute-0 nova_compute[257087]: 2025-12-05 10:21:01.263 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:01 compute-0 nova_compute[257087]: 2025-12-05 10:21:01.531 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:01 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2432055038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:21:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:01.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 72 op/s
Dec 05 10:21:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:01.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:02 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:21:02.063 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:21:02 compute-0 nova_compute[257087]: 2025-12-05 10:21:02.065 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:02 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:21:02.066 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:21:02 compute-0 nova_compute[257087]: 2025-12-05 10:21:02.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:02 compute-0 nova_compute[257087]: 2025-12-05 10:21:02.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:02 compute-0 ceph-mon[74418]: pgmap v956: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 72 op/s
Dec 05 10:21:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:21:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:21:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:21:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:21:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:03.719Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:21:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:21:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:03.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:21:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 97 op/s
Dec 05 10:21:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:03.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:04 compute-0 nova_compute[257087]: 2025-12-05 10:21:04.526 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:04 compute-0 nova_compute[257087]: 2025-12-05 10:21:04.527 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:05 compute-0 ceph-mon[74418]: pgmap v957: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 97 op/s
Dec 05 10:21:05 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1515145464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:21:05 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:21:05.069 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:21:05 compute-0 nova_compute[257087]: 2025-12-05 10:21:05.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:05 compute-0 nova_compute[257087]: 2025-12-05 10:21:05.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:21:05 compute-0 nova_compute[257087]: 2025-12-05 10:21:05.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:05 compute-0 nova_compute[257087]: 2025-12-05 10:21:05.557 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:21:05 compute-0 nova_compute[257087]: 2025-12-05 10:21:05.558 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:21:05 compute-0 nova_compute[257087]: 2025-12-05 10:21:05.558 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:21:05 compute-0 nova_compute[257087]: 2025-12-05 10:21:05.558 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:21:05 compute-0 nova_compute[257087]: 2025-12-05 10:21:05.558 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:21:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:21:05] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Dec 05 10:21:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:21:05] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Dec 05 10:21:05 compute-0 nova_compute[257087]: 2025-12-05 10:21:05.777 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:05.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 10:21:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:05.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2391153986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:21:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:21:06 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/754771841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.067 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.246 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.248 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4622MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.248 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.248 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.265 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.336 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.337 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.359 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:21:06 compute-0 sudo[272379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:21:06 compute-0 sudo[272379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:06 compute-0 sudo[272379]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:06 compute-0 sudo[272404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:21:06 compute-0 sudo[272404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:21:06 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71783940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.930 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.937 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.954 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.956 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:21:06 compute-0 nova_compute[257087]: 2025-12-05 10:21:06.957 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:21:07 compute-0 ceph-mon[74418]: pgmap v958: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 10:21:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/754771841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:21:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2061284389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:21:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/71783940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:21:07 compute-0 sudo[272404]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:07 compute-0 sudo[272462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:21:07 compute-0 sudo[272462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:07 compute-0 sudo[272462]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:07.423Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:21:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:07.424Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:21:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:07.425Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:21:07 compute-0 sudo[272487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 05 10:21:07 compute-0 sudo[272487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:07 compute-0 sudo[272487]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:21:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:07.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:21:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:21:07 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:21:07 compute-0 sudo[272531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:21:07 compute-0 sudo[272531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:07 compute-0 sudo[272531]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 10:21:07 compute-0 sudo[272556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:21:07 compute-0 nova_compute[257087]: 2025-12-05 10:21:07.957 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:07 compute-0 nova_compute[257087]: 2025-12-05 10:21:07.958 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:21:07 compute-0 nova_compute[257087]: 2025-12-05 10:21:07.958 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:21:07 compute-0 sudo[272556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:07 compute-0 nova_compute[257087]: 2025-12-05 10:21:07.978 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:21:07 compute-0 nova_compute[257087]: 2025-12-05 10:21:07.979 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:21:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:07.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:21:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:21:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:21:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:21:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3676145072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:21:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:21:08 compute-0 podman[272622]: 2025-12-05 10:21:08.401933109 +0000 UTC m=+0.060992549 container create b8834071d814184bc1ea426bc1fa282aed0cf053f7aa144362f032626ff1d163 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wilson, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:21:08 compute-0 systemd[1]: Started libpod-conmon-b8834071d814184bc1ea426bc1fa282aed0cf053f7aa144362f032626ff1d163.scope.
Dec 05 10:21:08 compute-0 podman[272622]: 2025-12-05 10:21:08.372917999 +0000 UTC m=+0.031977449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:21:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:21:08 compute-0 podman[272622]: 2025-12-05 10:21:08.506745317 +0000 UTC m=+0.165804767 container init b8834071d814184bc1ea426bc1fa282aed0cf053f7aa144362f032626ff1d163 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wilson, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 10:21:08 compute-0 podman[272622]: 2025-12-05 10:21:08.515769122 +0000 UTC m=+0.174828552 container start b8834071d814184bc1ea426bc1fa282aed0cf053f7aa144362f032626ff1d163 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:21:08 compute-0 podman[272622]: 2025-12-05 10:21:08.519643958 +0000 UTC m=+0.178703418 container attach b8834071d814184bc1ea426bc1fa282aed0cf053f7aa144362f032626ff1d163 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 05 10:21:08 compute-0 affectionate_wilson[272640]: 167 167
Dec 05 10:21:08 compute-0 systemd[1]: libpod-b8834071d814184bc1ea426bc1fa282aed0cf053f7aa144362f032626ff1d163.scope: Deactivated successfully.
Dec 05 10:21:08 compute-0 podman[272622]: 2025-12-05 10:21:08.527191392 +0000 UTC m=+0.186250842 container died b8834071d814184bc1ea426bc1fa282aed0cf053f7aa144362f032626ff1d163 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wilson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 05 10:21:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-364acb3439ca905ee1ff9fe0204aa9beade0e1550ce77a6162987db6d51f8e5b-merged.mount: Deactivated successfully.
Dec 05 10:21:08 compute-0 podman[272622]: 2025-12-05 10:21:08.578351983 +0000 UTC m=+0.237411413 container remove b8834071d814184bc1ea426bc1fa282aed0cf053f7aa144362f032626ff1d163 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 10:21:08 compute-0 systemd[1]: libpod-conmon-b8834071d814184bc1ea426bc1fa282aed0cf053f7aa144362f032626ff1d163.scope: Deactivated successfully.
Dec 05 10:21:08 compute-0 podman[272664]: 2025-12-05 10:21:08.766337182 +0000 UTC m=+0.058127851 container create a70691ce3a666ef9989e1d9b909e924c41e6840687326f6c37bf0de22be1f1cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_matsumoto, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 10:21:08 compute-0 systemd[1]: Started libpod-conmon-a70691ce3a666ef9989e1d9b909e924c41e6840687326f6c37bf0de22be1f1cc.scope.
Dec 05 10:21:08 compute-0 podman[272664]: 2025-12-05 10:21:08.736529832 +0000 UTC m=+0.028320541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:21:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb6b18ff8687023cd119de400438fae9e762a69fde389ae7396cf62eea42f58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb6b18ff8687023cd119de400438fae9e762a69fde389ae7396cf62eea42f58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb6b18ff8687023cd119de400438fae9e762a69fde389ae7396cf62eea42f58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb6b18ff8687023cd119de400438fae9e762a69fde389ae7396cf62eea42f58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb6b18ff8687023cd119de400438fae9e762a69fde389ae7396cf62eea42f58/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:08 compute-0 podman[272664]: 2025-12-05 10:21:08.871420118 +0000 UTC m=+0.163210797 container init a70691ce3a666ef9989e1d9b909e924c41e6840687326f6c37bf0de22be1f1cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:21:08 compute-0 podman[272664]: 2025-12-05 10:21:08.878289955 +0000 UTC m=+0.170080634 container start a70691ce3a666ef9989e1d9b909e924c41e6840687326f6c37bf0de22be1f1cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:21:08 compute-0 podman[272664]: 2025-12-05 10:21:08.882137849 +0000 UTC m=+0.173928528 container attach a70691ce3a666ef9989e1d9b909e924c41e6840687326f6c37bf0de22be1f1cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_matsumoto, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:21:09 compute-0 ceph-mon[74418]: pgmap v959: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 10:21:09 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3179038112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:21:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:09 compute-0 loving_matsumoto[272680]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:21:09 compute-0 loving_matsumoto[272680]: --> All data devices are unavailable
Dec 05 10:21:09 compute-0 systemd[1]: libpod-a70691ce3a666ef9989e1d9b909e924c41e6840687326f6c37bf0de22be1f1cc.scope: Deactivated successfully.
Dec 05 10:21:09 compute-0 podman[272664]: 2025-12-05 10:21:09.273529767 +0000 UTC m=+0.565320476 container died a70691ce3a666ef9989e1d9b909e924c41e6840687326f6c37bf0de22be1f1cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_matsumoto, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 10:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fb6b18ff8687023cd119de400438fae9e762a69fde389ae7396cf62eea42f58-merged.mount: Deactivated successfully.
Dec 05 10:21:09 compute-0 podman[272664]: 2025-12-05 10:21:09.339805767 +0000 UTC m=+0.631596456 container remove a70691ce3a666ef9989e1d9b909e924c41e6840687326f6c37bf0de22be1f1cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_matsumoto, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:21:09 compute-0 systemd[1]: libpod-conmon-a70691ce3a666ef9989e1d9b909e924c41e6840687326f6c37bf0de22be1f1cc.scope: Deactivated successfully.
Dec 05 10:21:09 compute-0 sudo[272556]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:09 compute-0 sudo[272707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:21:09 compute-0 sudo[272707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:09 compute-0 sudo[272707]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:09 compute-0 sudo[272732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:21:09 compute-0 sudo[272732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:09.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 10:21:09 compute-0 podman[272796]: 2025-12-05 10:21:09.954168095 +0000 UTC m=+0.044354337 container create 58d7ea34f2baa7faf3d2c4ab1f72c7ef4ee5ca44c46f7c2b28f663235e084ac0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mcnulty, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:21:09 compute-0 systemd[1]: Started libpod-conmon-58d7ea34f2baa7faf3d2c4ab1f72c7ef4ee5ca44c46f7c2b28f663235e084ac0.scope.
Dec 05 10:21:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:09.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:10 compute-0 podman[272796]: 2025-12-05 10:21:09.937331077 +0000 UTC m=+0.027517339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:21:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:21:10 compute-0 podman[272796]: 2025-12-05 10:21:10.087066516 +0000 UTC m=+0.177252778 container init 58d7ea34f2baa7faf3d2c4ab1f72c7ef4ee5ca44c46f7c2b28f663235e084ac0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mcnulty, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:21:10 compute-0 podman[272796]: 2025-12-05 10:21:10.094957711 +0000 UTC m=+0.185143953 container start 58d7ea34f2baa7faf3d2c4ab1f72c7ef4ee5ca44c46f7c2b28f663235e084ac0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mcnulty, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 10:21:10 compute-0 podman[272796]: 2025-12-05 10:21:10.098485827 +0000 UTC m=+0.188672059 container attach 58d7ea34f2baa7faf3d2c4ab1f72c7ef4ee5ca44c46f7c2b28f663235e084ac0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mcnulty, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Dec 05 10:21:10 compute-0 hopeful_mcnulty[272812]: 167 167
Dec 05 10:21:10 compute-0 systemd[1]: libpod-58d7ea34f2baa7faf3d2c4ab1f72c7ef4ee5ca44c46f7c2b28f663235e084ac0.scope: Deactivated successfully.
Dec 05 10:21:10 compute-0 conmon[272812]: conmon 58d7ea34f2baa7faf3d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58d7ea34f2baa7faf3d2c4ab1f72c7ef4ee5ca44c46f7c2b28f663235e084ac0.scope/container/memory.events
Dec 05 10:21:10 compute-0 podman[272796]: 2025-12-05 10:21:10.103816542 +0000 UTC m=+0.194002784 container died 58d7ea34f2baa7faf3d2c4ab1f72c7ef4ee5ca44c46f7c2b28f663235e084ac0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:21:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6174abc1909899732181bf6957efaf57e766b98fdce2c4629c3c6068df34bc3a-merged.mount: Deactivated successfully.
Dec 05 10:21:10 compute-0 podman[272796]: 2025-12-05 10:21:10.142882723 +0000 UTC m=+0.233068995 container remove 58d7ea34f2baa7faf3d2c4ab1f72c7ef4ee5ca44c46f7c2b28f663235e084ac0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 10:21:10 compute-0 systemd[1]: libpod-conmon-58d7ea34f2baa7faf3d2c4ab1f72c7ef4ee5ca44c46f7c2b28f663235e084ac0.scope: Deactivated successfully.
Dec 05 10:21:10 compute-0 podman[272837]: 2025-12-05 10:21:10.320915632 +0000 UTC m=+0.047082660 container create 04bebe583255e9e763596e2193a6f9045f184be2fa00ca0266453320f3ab8824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lumiere, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 10:21:10 compute-0 systemd[1]: Started libpod-conmon-04bebe583255e9e763596e2193a6f9045f184be2fa00ca0266453320f3ab8824.scope.
Dec 05 10:21:10 compute-0 podman[272837]: 2025-12-05 10:21:10.299987523 +0000 UTC m=+0.026154571 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:21:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83282d84969437b6c56ac074c4b1271dad27cae65c1b37ba9a26043ab1c7c4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83282d84969437b6c56ac074c4b1271dad27cae65c1b37ba9a26043ab1c7c4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83282d84969437b6c56ac074c4b1271dad27cae65c1b37ba9a26043ab1c7c4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83282d84969437b6c56ac074c4b1271dad27cae65c1b37ba9a26043ab1c7c4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:10 compute-0 podman[272837]: 2025-12-05 10:21:10.419685626 +0000 UTC m=+0.145852684 container init 04bebe583255e9e763596e2193a6f9045f184be2fa00ca0266453320f3ab8824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lumiere, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 10:21:10 compute-0 podman[272837]: 2025-12-05 10:21:10.426316126 +0000 UTC m=+0.152483154 container start 04bebe583255e9e763596e2193a6f9045f184be2fa00ca0266453320f3ab8824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lumiere, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 05 10:21:10 compute-0 podman[272837]: 2025-12-05 10:21:10.432298289 +0000 UTC m=+0.158465317 container attach 04bebe583255e9e763596e2193a6f9045f184be2fa00ca0266453320f3ab8824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lumiere, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 10:21:10 compute-0 boring_lumiere[272853]: {
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:     "1": [
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:         {
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             "devices": [
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "/dev/loop3"
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             ],
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             "lv_name": "ceph_lv0",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             "lv_size": "21470642176",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             "name": "ceph_lv0",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             "tags": {
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.cluster_name": "ceph",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.crush_device_class": "",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.encrypted": "0",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.osd_id": "1",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.type": "block",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.vdo": "0",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:                 "ceph.with_tpm": "0"
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             },
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             "type": "block",
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:             "vg_name": "ceph_vg0"
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:         }
Dec 05 10:21:10 compute-0 boring_lumiere[272853]:     ]
Dec 05 10:21:10 compute-0 boring_lumiere[272853]: }
Dec 05 10:21:10 compute-0 systemd[1]: libpod-04bebe583255e9e763596e2193a6f9045f184be2fa00ca0266453320f3ab8824.scope: Deactivated successfully.
Dec 05 10:21:10 compute-0 podman[272837]: 2025-12-05 10:21:10.729889337 +0000 UTC m=+0.456056365 container died 04bebe583255e9e763596e2193a6f9045f184be2fa00ca0266453320f3ab8824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lumiere, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 10:21:10 compute-0 nova_compute[257087]: 2025-12-05 10:21:10.779 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:11 compute-0 ceph-mon[74418]: pgmap v960: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 10:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b83282d84969437b6c56ac074c4b1271dad27cae65c1b37ba9a26043ab1c7c4f-merged.mount: Deactivated successfully.
Dec 05 10:21:11 compute-0 podman[272837]: 2025-12-05 10:21:11.270648974 +0000 UTC m=+0.996816002 container remove 04bebe583255e9e763596e2193a6f9045f184be2fa00ca0266453320f3ab8824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lumiere, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:21:11 compute-0 nova_compute[257087]: 2025-12-05 10:21:11.314 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:11 compute-0 systemd[1]: libpod-conmon-04bebe583255e9e763596e2193a6f9045f184be2fa00ca0266453320f3ab8824.scope: Deactivated successfully.
Dec 05 10:21:11 compute-0 sudo[272732]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:11 compute-0 sudo[272877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:21:11 compute-0 sudo[272877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:11 compute-0 sudo[272877]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:11 compute-0 sudo[272902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:21:11 compute-0 sudo[272902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:11.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Dec 05 10:21:11 compute-0 podman[272966]: 2025-12-05 10:21:11.949850962 +0000 UTC m=+0.064119693 container create f596018a86e09ce58ec1a2fd0ed36ce499ed34d9314ba1c282d660c3b67e73d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Dec 05 10:21:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:12.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:12 compute-0 podman[272966]: 2025-12-05 10:21:11.909324191 +0000 UTC m=+0.023592942 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:21:12 compute-0 systemd[1]: Started libpod-conmon-f596018a86e09ce58ec1a2fd0ed36ce499ed34d9314ba1c282d660c3b67e73d1.scope.
Dec 05 10:21:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:21:12 compute-0 podman[272966]: 2025-12-05 10:21:12.064480928 +0000 UTC m=+0.178749679 container init f596018a86e09ce58ec1a2fd0ed36ce499ed34d9314ba1c282d660c3b67e73d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:21:12 compute-0 podman[272966]: 2025-12-05 10:21:12.073754249 +0000 UTC m=+0.188022980 container start f596018a86e09ce58ec1a2fd0ed36ce499ed34d9314ba1c282d660c3b67e73d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 05 10:21:12 compute-0 podman[272966]: 2025-12-05 10:21:12.077264356 +0000 UTC m=+0.191533117 container attach f596018a86e09ce58ec1a2fd0ed36ce499ed34d9314ba1c282d660c3b67e73d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 05 10:21:12 compute-0 pedantic_bhaskara[272982]: 167 167
Dec 05 10:21:12 compute-0 systemd[1]: libpod-f596018a86e09ce58ec1a2fd0ed36ce499ed34d9314ba1c282d660c3b67e73d1.scope: Deactivated successfully.
Dec 05 10:21:12 compute-0 podman[272966]: 2025-12-05 10:21:12.079749072 +0000 UTC m=+0.194017833 container died f596018a86e09ce58ec1a2fd0ed36ce499ed34d9314ba1c282d660c3b67e73d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 10:21:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a551a380f902660e193c62d116d197d7fa33f23ae79d1341115379d342f0240f-merged.mount: Deactivated successfully.
Dec 05 10:21:12 compute-0 podman[272966]: 2025-12-05 10:21:12.121189409 +0000 UTC m=+0.235458140 container remove f596018a86e09ce58ec1a2fd0ed36ce499ed34d9314ba1c282d660c3b67e73d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Dec 05 10:21:12 compute-0 systemd[1]: libpod-conmon-f596018a86e09ce58ec1a2fd0ed36ce499ed34d9314ba1c282d660c3b67e73d1.scope: Deactivated successfully.
Dec 05 10:21:12 compute-0 podman[273005]: 2025-12-05 10:21:12.317488964 +0000 UTC m=+0.050508534 container create 290a0a375bc578cb1f11c601384385d0952d35f18e692baf56aabe8e04b22b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_faraday, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:21:12 compute-0 systemd[1]: Started libpod-conmon-290a0a375bc578cb1f11c601384385d0952d35f18e692baf56aabe8e04b22b25.scope.
Dec 05 10:21:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:21:12 compute-0 podman[273005]: 2025-12-05 10:21:12.298672143 +0000 UTC m=+0.031691733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02376d24a49f9379878920896505ecfefc62ea061934bd0224c8abdf8f18109e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02376d24a49f9379878920896505ecfefc62ea061934bd0224c8abdf8f18109e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02376d24a49f9379878920896505ecfefc62ea061934bd0224c8abdf8f18109e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02376d24a49f9379878920896505ecfefc62ea061934bd0224c8abdf8f18109e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:21:12 compute-0 podman[273005]: 2025-12-05 10:21:12.410633616 +0000 UTC m=+0.143653206 container init 290a0a375bc578cb1f11c601384385d0952d35f18e692baf56aabe8e04b22b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 10:21:12 compute-0 podman[273005]: 2025-12-05 10:21:12.4177878 +0000 UTC m=+0.150807370 container start 290a0a375bc578cb1f11c601384385d0952d35f18e692baf56aabe8e04b22b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 05 10:21:12 compute-0 podman[273005]: 2025-12-05 10:21:12.422082557 +0000 UTC m=+0.155102137 container attach 290a0a375bc578cb1f11c601384385d0952d35f18e692baf56aabe8e04b22b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_faraday, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:21:12 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/820119424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:21:12 compute-0 ceph-mon[74418]: pgmap v961: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Dec 05 10:21:12 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3943032109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:21:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:21:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:21:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:21:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:21:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:21:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:21:13 compute-0 lvm[273097]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:21:13 compute-0 lvm[273097]: VG ceph_vg0 finished
Dec 05 10:21:13 compute-0 laughing_faraday[273022]: {}
Dec 05 10:21:13 compute-0 systemd[1]: libpod-290a0a375bc578cb1f11c601384385d0952d35f18e692baf56aabe8e04b22b25.scope: Deactivated successfully.
Dec 05 10:21:13 compute-0 podman[273005]: 2025-12-05 10:21:13.214399279 +0000 UTC m=+0.947418849 container died 290a0a375bc578cb1f11c601384385d0952d35f18e692baf56aabe8e04b22b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_faraday, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 10:21:13 compute-0 systemd[1]: libpod-290a0a375bc578cb1f11c601384385d0952d35f18e692baf56aabe8e04b22b25.scope: Consumed 1.367s CPU time.
Dec 05 10:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-02376d24a49f9379878920896505ecfefc62ea061934bd0224c8abdf8f18109e-merged.mount: Deactivated successfully.
Dec 05 10:21:13 compute-0 podman[273005]: 2025-12-05 10:21:13.261923991 +0000 UTC m=+0.994943561 container remove 290a0a375bc578cb1f11c601384385d0952d35f18e692baf56aabe8e04b22b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:21:13 compute-0 systemd[1]: libpod-conmon-290a0a375bc578cb1f11c601384385d0952d35f18e692baf56aabe8e04b22b25.scope: Deactivated successfully.
Dec 05 10:21:13 compute-0 sudo[272902]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:21:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:21:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:13 compute-0 sudo[273115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:21:13 compute-0 sudo[273115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:13 compute-0 sudo[273115]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:21:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:21:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:13.719Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:21:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:13.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Dec 05 10:21:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:14.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:14 compute-0 sudo[273141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:21:14 compute-0 sudo[273141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:14 compute-0 sudo[273141]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:14 compute-0 ceph-mon[74418]: pgmap v962: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Dec 05 10:21:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:21:15] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Dec 05 10:21:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:21:15] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Dec 05 10:21:15 compute-0 nova_compute[257087]: 2025-12-05 10:21:15.782 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:15.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 05 10:21:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:16.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:16 compute-0 nova_compute[257087]: 2025-12-05 10:21:16.356 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:17 compute-0 ceph-mon[74418]: pgmap v963: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 05 10:21:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:17.426Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:21:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:17.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 05 10:21:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:21:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:21:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:21:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:21:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:18.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:19 compute-0 ceph-mon[74418]: pgmap v964: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 05 10:21:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:19.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Dec 05 10:21:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:20.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:20 compute-0 ceph-mon[74418]: pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Dec 05 10:21:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:21:20.578 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:21:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:21:20.579 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:21:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:21:20.579 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:21:20 compute-0 nova_compute[257087]: 2025-12-05 10:21:20.828 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/525004633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:21:21 compute-0 nova_compute[257087]: 2025-12-05 10:21:21.356 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:21 compute-0 podman[273173]: 2025-12-05 10:21:21.408413984 +0000 UTC m=+0.066745295 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:21:21 compute-0 podman[273174]: 2025-12-05 10:21:21.414272713 +0000 UTC m=+0.071748391 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:21:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:21.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Dec 05 10:21:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:22.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:22 compute-0 ceph-mon[74418]: pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Dec 05 10:21:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:21:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:21:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:21:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:21:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:23.722Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:21:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:23.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Dec 05 10:21:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:24.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:24 compute-0 podman[273215]: 2025-12-05 10:21:24.421163955 +0000 UTC m=+0.086329298 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:21:25 compute-0 ceph-mon[74418]: pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Dec 05 10:21:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:21:25] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Dec 05 10:21:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:21:25] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Dec 05 10:21:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:25.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:25 compute-0 nova_compute[257087]: 2025-12-05 10:21:25.832 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Dec 05 10:21:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:26.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:26 compute-0 nova_compute[257087]: 2025-12-05 10:21:26.359 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:26 compute-0 ceph-mon[74418]: pgmap v968: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Dec 05 10:21:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:27.427Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:21:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:27.428Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:21:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:21:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:21:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:21:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:21:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:21:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:21:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:21:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:21:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:21:27
Dec 05 10:21:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:21:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:21:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.nfs', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.meta', '.mgr', 'images']
Dec 05 10:21:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:21:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:27.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Dec 05 10:21:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:21:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:21:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:21:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:21:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:28.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:21:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:21:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:21:28 compute-0 ceph-mon[74418]: pgmap v969: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Dec 05 10:21:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:29.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Dec 05 10:21:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:30.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:30 compute-0 nova_compute[257087]: 2025-12-05 10:21:30.888 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:31 compute-0 ceph-mon[74418]: pgmap v970: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Dec 05 10:21:31 compute-0 nova_compute[257087]: 2025-12-05 10:21:31.361 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:31.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:32.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:32 compute-0 ceph-mon[74418]: pgmap v971: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:21:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:21:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:21:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:21:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:33.723Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:21:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:33.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:21:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:34.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:34 compute-0 sudo[273254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:21:34 compute-0 sudo[273254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:34 compute-0 sudo[273254]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:35 compute-0 ceph-mon[74418]: pgmap v972: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:21:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:21:35] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec 05 10:21:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:21:35] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec 05 10:21:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:35.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:35 compute-0 nova_compute[257087]: 2025-12-05 10:21:35.891 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:36.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:36 compute-0 nova_compute[257087]: 2025-12-05 10:21:36.363 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:37 compute-0 ceph-mon[74418]: pgmap v973: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:37.429Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:21:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:37.429Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:21:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:37.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:21:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:21:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:21:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:21:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:21:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:38.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:21:38 compute-0 ceph-mon[74418]: pgmap v974: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.470418) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930098470462, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 882, "num_deletes": 251, "total_data_size": 1521827, "memory_usage": 1552576, "flush_reason": "Manual Compaction"}
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930098487002, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1482359, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28760, "largest_seqno": 29641, "table_properties": {"data_size": 1477812, "index_size": 2199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10204, "raw_average_key_size": 20, "raw_value_size": 1468682, "raw_average_value_size": 2891, "num_data_blocks": 93, "num_entries": 508, "num_filter_entries": 508, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930030, "oldest_key_time": 1764930030, "file_creation_time": 1764930098, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 16717 microseconds, and 5750 cpu microseconds.
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.487110) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1482359 bytes OK
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.487174) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.490358) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.490383) EVENT_LOG_v1 {"time_micros": 1764930098490375, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.490409) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1517538, prev total WAL file size 1517538, number of live WAL files 2.
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.491415) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1447KB)], [62(12MB)]
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930098491497, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 15110081, "oldest_snapshot_seqno": -1}
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5918 keys, 12911274 bytes, temperature: kUnknown
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930098651115, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12911274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12873001, "index_size": 22355, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 153255, "raw_average_key_size": 25, "raw_value_size": 12767326, "raw_average_value_size": 2157, "num_data_blocks": 892, "num_entries": 5918, "num_filter_entries": 5918, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764930098, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.651485) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12911274 bytes
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.653557) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.6 rd, 80.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 13.0 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(18.9) write-amplify(8.7) OK, records in: 6438, records dropped: 520 output_compression: NoCompression
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.653579) EVENT_LOG_v1 {"time_micros": 1764930098653569, "job": 34, "event": "compaction_finished", "compaction_time_micros": 159714, "compaction_time_cpu_micros": 57765, "output_level": 6, "num_output_files": 1, "total_output_size": 12911274, "num_input_records": 6438, "num_output_records": 5918, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930098654083, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930098657406, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.491013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.657444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.657452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.657482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.657484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:38 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:21:38.657486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:39.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:21:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:40.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:40 compute-0 nova_compute[257087]: 2025-12-05 10:21:40.893 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:41 compute-0 ceph-mon[74418]: pgmap v975: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:21:41 compute-0 nova_compute[257087]: 2025-12-05 10:21:41.405 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:41.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:42.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:21:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:21:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:21:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:21:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:21:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:21:43 compute-0 ceph-mon[74418]: pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:21:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:43.724Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:21:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:21:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:43.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:21:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:21:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:44.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:45 compute-0 ceph-mon[74418]: pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:21:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:21:45] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:21:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:21:45] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:21:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:45.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:45 compute-0 nova_compute[257087]: 2025-12-05 10:21:45.959 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:46.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:46 compute-0 nova_compute[257087]: 2025-12-05 10:21:46.407 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:47 compute-0 ceph-mon[74418]: pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:47.430Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:21:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:47.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:21:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:21:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:21:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:21:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:48.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:49 compute-0 ceph-mon[74418]: pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:49 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2933028746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:21:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:49.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:21:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:50.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:51 compute-0 nova_compute[257087]: 2025-12-05 10:21:51.005 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:51 compute-0 ceph-mon[74418]: pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:21:51 compute-0 nova_compute[257087]: 2025-12-05 10:21:51.408 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:51.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:52.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:52 compute-0 podman[273297]: 2025-12-05 10:21:52.402736279 +0000 UTC m=+0.064308379 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:21:52 compute-0 podman[273298]: 2025-12-05 10:21:52.412112885 +0000 UTC m=+0.070601311 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 05 10:21:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:21:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:21:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:21:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:21:53 compute-0 ceph-mon[74418]: pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:21:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:53.725Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:21:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:53.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:21:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:54.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:54 compute-0 ceph-mon[74418]: pgmap v982: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:21:54 compute-0 sudo[273338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:21:54 compute-0 sudo[273338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:21:54 compute-0 sudo[273338]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:54 compute-0 podman[273362]: 2025-12-05 10:21:54.713965253 +0000 UTC m=+0.088278500 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 10:21:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:21:55] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:21:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:21:55] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:21:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:55.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:21:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:56.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:56 compute-0 nova_compute[257087]: 2025-12-05 10:21:56.086 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:56 compute-0 nova_compute[257087]: 2025-12-05 10:21:56.411 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:57 compute-0 ceph-mon[74418]: pgmap v983: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:21:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3313225640' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:21:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1096026325' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:21:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1096026325' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:21:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:21:57.432Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:21:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:21:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:21:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:21:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:21:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:21:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:21:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:21:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:21:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:57.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v984: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:21:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:21:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:21:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:21:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:21:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:21:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:21:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:21:58.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:21:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1002248299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:21:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:21:58 compute-0 ceph-mon[74418]: pgmap v984: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:21:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:21:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:21:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:21:59.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:21:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:22:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:00.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:01 compute-0 nova_compute[257087]: 2025-12-05 10:22:01.142 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:01 compute-0 ceph-mon[74418]: pgmap v985: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:22:01 compute-0 nova_compute[257087]: 2025-12-05 10:22:01.412 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:01 compute-0 nova_compute[257087]: 2025-12-05 10:22:01.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:01.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:22:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:02.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:02 compute-0 ceph-mon[74418]: pgmap v986: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:22:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:22:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:22:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:22:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:22:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:03.727Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:22:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:03.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 05 10:22:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:04.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:04 compute-0 nova_compute[257087]: 2025-12-05 10:22:04.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:04 compute-0 nova_compute[257087]: 2025-12-05 10:22:04.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:04 compute-0 nova_compute[257087]: 2025-12-05 10:22:04.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:05 compute-0 ceph-mon[74418]: pgmap v987: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 05 10:22:05 compute-0 nova_compute[257087]: 2025-12-05 10:22:05.167 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:05 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:22:05.166 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:22:05 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:22:05.181 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:22:05 compute-0 nova_compute[257087]: 2025-12-05 10:22:05.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:05 compute-0 nova_compute[257087]: 2025-12-05 10:22:05.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:05 compute-0 nova_compute[257087]: 2025-12-05 10:22:05.580 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:22:05 compute-0 nova_compute[257087]: 2025-12-05 10:22:05.580 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:22:05 compute-0 nova_compute[257087]: 2025-12-05 10:22:05.582 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:22:05 compute-0 nova_compute[257087]: 2025-12-05 10:22:05.582 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:22:05 compute-0 nova_compute[257087]: 2025-12-05 10:22:05.582 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:22:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:22:05] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Dec 05 10:22:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:22:05] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Dec 05 10:22:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:05.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:22:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:22:06 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/899948580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.085 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:22:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:06.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:06 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:22:06.183 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.235 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.331 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.333 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4628MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.333 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.333 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.422 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.452 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.453 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.480 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:22:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2482795592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:22:06 compute-0 ceph-mon[74418]: pgmap v988: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:22:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/899948580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:22:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:22:06 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121032369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.957 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.964 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.987 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.989 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:22:06 compute-0 nova_compute[257087]: 2025-12-05 10:22:06.989 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:22:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:07.433Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:22:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3167429627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:22:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2121032369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:22:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:07.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:22:07 compute-0 nova_compute[257087]: 2025-12-05 10:22:07.989 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:07 compute-0 nova_compute[257087]: 2025-12-05 10:22:07.990 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:07 compute-0 nova_compute[257087]: 2025-12-05 10:22:07.990 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:22:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:22:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:22:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:22:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:22:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:08.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:08 compute-0 ceph-mon[74418]: pgmap v989: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:22:08 compute-0 nova_compute[257087]: 2025-12-05 10:22:08.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:08 compute-0 nova_compute[257087]: 2025-12-05 10:22:08.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:22:08 compute-0 nova_compute[257087]: 2025-12-05 10:22:08.530 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:22:08 compute-0 nova_compute[257087]: 2025-12-05 10:22:08.545 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:22:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:09 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3603944823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:22:09 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3775786346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:22:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:09.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Dec 05 10:22:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:10.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:10 compute-0 nova_compute[257087]: 2025-12-05 10:22:10.540 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:10 compute-0 ceph-mon[74418]: pgmap v990: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Dec 05 10:22:11 compute-0 nova_compute[257087]: 2025-12-05 10:22:11.272 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:11 compute-0 nova_compute[257087]: 2025-12-05 10:22:11.424 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:11.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:22:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:12.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:22:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:22:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:22:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:22:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:22:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:22:13 compute-0 ceph-mon[74418]: pgmap v991: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:22:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:22:13 compute-0 sudo[273453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:22:13 compute-0 sudo[273453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:22:13 compute-0 sudo[273453]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:13.728Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:22:13 compute-0 sudo[273478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:22:13 compute-0 sudo[273478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:22:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:13.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 109 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 121 op/s
Dec 05 10:22:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:22:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:14.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:22:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:14 compute-0 sudo[273478]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 05 10:22:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 10:22:14 compute-0 sudo[273538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:22:14 compute-0 sudo[273538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:22:14 compute-0 sudo[273538]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:22:15] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec 05 10:22:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:22:15] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec 05 10:22:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:22:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:15.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:22:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 10:22:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 109 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 263 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Dec 05 10:22:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:16.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:16 compute-0 ceph-mon[74418]: pgmap v992: 353 pgs: 353 active+clean; 109 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 121 op/s
Dec 05 10:22:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 10:22:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:16 compute-0 nova_compute[257087]: 2025-12-05 10:22:16.425 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:22:16 compute-0 nova_compute[257087]: 2025-12-05 10:22:16.427 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:22:16 compute-0 nova_compute[257087]: 2025-12-05 10:22:16.428 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:22:16 compute-0 nova_compute[257087]: 2025-12-05 10:22:16.428 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:22:16 compute-0 nova_compute[257087]: 2025-12-05 10:22:16.434 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:16 compute-0 nova_compute[257087]: 2025-12-05 10:22:16.435 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:22:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 05 10:22:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 10:22:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 10:22:17 compute-0 ceph-mon[74418]: pgmap v993: 353 pgs: 353 active+clean; 109 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 263 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Dec 05 10:22:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 10:22:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:17.436Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:22:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 10:22:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 10:22:17 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:17.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 109 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 263 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Dec 05 10:22:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:22:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:22:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:22:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:22:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:18.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 05 10:22:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 10:22:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:22:18 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:22:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:22:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:22:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:22:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Dec 05 10:22:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:22:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:22:18 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:22:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:22:18 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:22:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:22:18 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:22:18 compute-0 sudo[273567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:22:18 compute-0 sudo[273567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:22:18 compute-0 sudo[273567]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:18 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:18 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:18 compute-0 ceph-mon[74418]: pgmap v994: 353 pgs: 353 active+clean; 109 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 263 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Dec 05 10:22:18 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 10:22:18 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:22:18 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:22:18 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:18 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:18 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:22:18 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:22:18 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:22:18 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec 05 10:22:18 compute-0 sudo[273592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:22:18 compute-0 sudo[273592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:22:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:19 compute-0 podman[273659]: 2025-12-05 10:22:19.329518718 +0000 UTC m=+0.049135876 container create f5fd77e0ba18606e869b7ff34d9e5ebff15d39b40b10c73c59d82e2644afe6be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 10:22:19 compute-0 systemd[1]: Started libpod-conmon-f5fd77e0ba18606e869b7ff34d9e5ebff15d39b40b10c73c59d82e2644afe6be.scope.
Dec 05 10:22:19 compute-0 podman[273659]: 2025-12-05 10:22:19.305414413 +0000 UTC m=+0.025031571 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:22:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:22:19 compute-0 podman[273659]: 2025-12-05 10:22:19.423516963 +0000 UTC m=+0.143134171 container init f5fd77e0ba18606e869b7ff34d9e5ebff15d39b40b10c73c59d82e2644afe6be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 10:22:19 compute-0 podman[273659]: 2025-12-05 10:22:19.432703562 +0000 UTC m=+0.152320720 container start f5fd77e0ba18606e869b7ff34d9e5ebff15d39b40b10c73c59d82e2644afe6be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_feistel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 10:22:19 compute-0 podman[273659]: 2025-12-05 10:22:19.436984889 +0000 UTC m=+0.156602047 container attach f5fd77e0ba18606e869b7ff34d9e5ebff15d39b40b10c73c59d82e2644afe6be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:22:19 compute-0 goofy_feistel[273675]: 167 167
Dec 05 10:22:19 compute-0 systemd[1]: libpod-f5fd77e0ba18606e869b7ff34d9e5ebff15d39b40b10c73c59d82e2644afe6be.scope: Deactivated successfully.
Dec 05 10:22:19 compute-0 podman[273659]: 2025-12-05 10:22:19.442574471 +0000 UTC m=+0.162191629 container died f5fd77e0ba18606e869b7ff34d9e5ebff15d39b40b10c73c59d82e2644afe6be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_feistel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:22:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdb211c9d61d62e0ab2f2a38170e8122be7ad56ed1b6476cf2640660487e44d9-merged.mount: Deactivated successfully.
Dec 05 10:22:19 compute-0 podman[273659]: 2025-12-05 10:22:19.487300737 +0000 UTC m=+0.206917895 container remove f5fd77e0ba18606e869b7ff34d9e5ebff15d39b40b10c73c59d82e2644afe6be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_feistel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 05 10:22:19 compute-0 systemd[1]: libpod-conmon-f5fd77e0ba18606e869b7ff34d9e5ebff15d39b40b10c73c59d82e2644afe6be.scope: Deactivated successfully.
Dec 05 10:22:19 compute-0 podman[273700]: 2025-12-05 10:22:19.673774334 +0000 UTC m=+0.051028888 container create 59ff508ff5c1d3c39bb98996086106277fa7be16fc269f35bb1cdfc485262d60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_galois, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:22:19 compute-0 systemd[1]: Started libpod-conmon-59ff508ff5c1d3c39bb98996086106277fa7be16fc269f35bb1cdfc485262d60.scope.
Dec 05 10:22:19 compute-0 podman[273700]: 2025-12-05 10:22:19.651289673 +0000 UTC m=+0.028544247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:22:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e4ba6ec1421965b770a96dc31f33a1d9de8c6d33bb597ba386a0389eebd6e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e4ba6ec1421965b770a96dc31f33a1d9de8c6d33bb597ba386a0389eebd6e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e4ba6ec1421965b770a96dc31f33a1d9de8c6d33bb597ba386a0389eebd6e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e4ba6ec1421965b770a96dc31f33a1d9de8c6d33bb597ba386a0389eebd6e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e4ba6ec1421965b770a96dc31f33a1d9de8c6d33bb597ba386a0389eebd6e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:19 compute-0 podman[273700]: 2025-12-05 10:22:19.775402526 +0000 UTC m=+0.152657100 container init 59ff508ff5c1d3c39bb98996086106277fa7be16fc269f35bb1cdfc485262d60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:22:19 compute-0 podman[273700]: 2025-12-05 10:22:19.784611757 +0000 UTC m=+0.161866311 container start 59ff508ff5c1d3c39bb98996086106277fa7be16fc269f35bb1cdfc485262d60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_galois, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:22:19 compute-0 podman[273700]: 2025-12-05 10:22:19.789492228 +0000 UTC m=+0.166746792 container attach 59ff508ff5c1d3c39bb98996086106277fa7be16fc269f35bb1cdfc485262d60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_galois, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:22:19 compute-0 ceph-mon[74418]: pgmap v995: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Dec 05 10:22:19 compute-0 ceph-mon[74418]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec 05 10:22:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:19.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:20.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:20 compute-0 infallible_galois[273716]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:22:20 compute-0 infallible_galois[273716]: --> All data devices are unavailable
Dec 05 10:22:20 compute-0 systemd[1]: libpod-59ff508ff5c1d3c39bb98996086106277fa7be16fc269f35bb1cdfc485262d60.scope: Deactivated successfully.
Dec 05 10:22:20 compute-0 podman[273700]: 2025-12-05 10:22:20.184576687 +0000 UTC m=+0.561831241 container died 59ff508ff5c1d3c39bb98996086106277fa7be16fc269f35bb1cdfc485262d60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 10:22:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-39e4ba6ec1421965b770a96dc31f33a1d9de8c6d33bb597ba386a0389eebd6e1-merged.mount: Deactivated successfully.
Dec 05 10:22:20 compute-0 podman[273700]: 2025-12-05 10:22:20.230598467 +0000 UTC m=+0.607853021 container remove 59ff508ff5c1d3c39bb98996086106277fa7be16fc269f35bb1cdfc485262d60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 10:22:20 compute-0 systemd[1]: libpod-conmon-59ff508ff5c1d3c39bb98996086106277fa7be16fc269f35bb1cdfc485262d60.scope: Deactivated successfully.
Dec 05 10:22:20 compute-0 sudo[273592]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:20 compute-0 sudo[273745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:22:20 compute-0 sudo[273745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:22:20 compute-0 sudo[273745]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:20 compute-0 sudo[273770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:22:20 compute-0 sudo[273770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:22:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:22:20.579 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:22:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:22:20.580 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:22:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:22:20.580 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:22:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Dec 05 10:22:20 compute-0 podman[273838]: 2025-12-05 10:22:20.898678194 +0000 UTC m=+0.046698780 container create 4313cf0e6a0a88459e88d40f1bb75e494960b6360733eec413a71b2bfa8bb230 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_williams, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:22:20 compute-0 systemd[1]: Started libpod-conmon-4313cf0e6a0a88459e88d40f1bb75e494960b6360733eec413a71b2bfa8bb230.scope.
Dec 05 10:22:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:22:20 compute-0 podman[273838]: 2025-12-05 10:22:20.971940606 +0000 UTC m=+0.119961212 container init 4313cf0e6a0a88459e88d40f1bb75e494960b6360733eec413a71b2bfa8bb230 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 05 10:22:20 compute-0 podman[273838]: 2025-12-05 10:22:20.880606853 +0000 UTC m=+0.028627459 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:22:20 compute-0 podman[273838]: 2025-12-05 10:22:20.979787828 +0000 UTC m=+0.127808414 container start 4313cf0e6a0a88459e88d40f1bb75e494960b6360733eec413a71b2bfa8bb230 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_williams, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 10:22:20 compute-0 podman[273838]: 2025-12-05 10:22:20.984089425 +0000 UTC m=+0.132110031 container attach 4313cf0e6a0a88459e88d40f1bb75e494960b6360733eec413a71b2bfa8bb230 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_williams, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:22:20 compute-0 stupefied_williams[273855]: 167 167
Dec 05 10:22:20 compute-0 systemd[1]: libpod-4313cf0e6a0a88459e88d40f1bb75e494960b6360733eec413a71b2bfa8bb230.scope: Deactivated successfully.
Dec 05 10:22:20 compute-0 conmon[273855]: conmon 4313cf0e6a0a88459e88 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4313cf0e6a0a88459e88d40f1bb75e494960b6360733eec413a71b2bfa8bb230.scope/container/memory.events
Dec 05 10:22:20 compute-0 podman[273838]: 2025-12-05 10:22:20.988792724 +0000 UTC m=+0.136813300 container died 4313cf0e6a0a88459e88d40f1bb75e494960b6360733eec413a71b2bfa8bb230 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_williams, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:22:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f091a199d99be10a9deef0c11983261dc93a4eb8dd332ad2ee465add3a8c296-merged.mount: Deactivated successfully.
Dec 05 10:22:21 compute-0 podman[273838]: 2025-12-05 10:22:21.026120068 +0000 UTC m=+0.174140654 container remove 4313cf0e6a0a88459e88d40f1bb75e494960b6360733eec413a71b2bfa8bb230 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:22:21 compute-0 systemd[1]: libpod-conmon-4313cf0e6a0a88459e88d40f1bb75e494960b6360733eec413a71b2bfa8bb230.scope: Deactivated successfully.
Dec 05 10:22:21 compute-0 podman[273880]: 2025-12-05 10:22:21.178066907 +0000 UTC m=+0.026537202 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:22:21 compute-0 nova_compute[257087]: 2025-12-05 10:22:21.436 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:22:21 compute-0 nova_compute[257087]: 2025-12-05 10:22:21.438 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:21.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:21 compute-0 podman[273880]: 2025-12-05 10:22:21.955044794 +0000 UTC m=+0.803515109 container create 79aa60c52276b58f1538e1575285fbd878ac119a72c17e97227d8b0d4c8de7b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 10:22:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:22.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:22 compute-0 systemd[1]: Started libpod-conmon-79aa60c52276b58f1538e1575285fbd878ac119a72c17e97227d8b0d4c8de7b4.scope.
Dec 05 10:22:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55114d7a15750b86a6761e32df4e9a9438665692969e2fdbe50e1bb0d8e493e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55114d7a15750b86a6761e32df4e9a9438665692969e2fdbe50e1bb0d8e493e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55114d7a15750b86a6761e32df4e9a9438665692969e2fdbe50e1bb0d8e493e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55114d7a15750b86a6761e32df4e9a9438665692969e2fdbe50e1bb0d8e493e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:22 compute-0 ceph-mon[74418]: pgmap v996: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Dec 05 10:22:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Dec 05 10:22:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:22:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:22:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:22:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:22:23 compute-0 podman[273880]: 2025-12-05 10:22:23.376588908 +0000 UTC m=+2.225059203 container init 79aa60c52276b58f1538e1575285fbd878ac119a72c17e97227d8b0d4c8de7b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_roentgen, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 05 10:22:23 compute-0 podman[273880]: 2025-12-05 10:22:23.389430587 +0000 UTC m=+2.237900862 container start 79aa60c52276b58f1538e1575285fbd878ac119a72c17e97227d8b0d4c8de7b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_roentgen, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]: {
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:     "1": [
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:         {
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             "devices": [
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "/dev/loop3"
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             ],
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             "lv_name": "ceph_lv0",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             "lv_size": "21470642176",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             "name": "ceph_lv0",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             "tags": {
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.cluster_name": "ceph",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.crush_device_class": "",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.encrypted": "0",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.osd_id": "1",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.type": "block",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.vdo": "0",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:                 "ceph.with_tpm": "0"
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             },
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             "type": "block",
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:             "vg_name": "ceph_vg0"
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:         }
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]:     ]
Dec 05 10:22:23 compute-0 laughing_roentgen[273897]: }
Dec 05 10:22:23 compute-0 systemd[1]: libpod-79aa60c52276b58f1538e1575285fbd878ac119a72c17e97227d8b0d4c8de7b4.scope: Deactivated successfully.
Dec 05 10:22:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:23.730Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:22:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:23.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:24 compute-0 podman[273880]: 2025-12-05 10:22:24.040203373 +0000 UTC m=+2.888795242 container attach 79aa60c52276b58f1538e1575285fbd878ac119a72c17e97227d8b0d4c8de7b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:22:24 compute-0 podman[273880]: 2025-12-05 10:22:24.04303629 +0000 UTC m=+2.891506595 container died 79aa60c52276b58f1538e1575285fbd878ac119a72c17e97227d8b0d4c8de7b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 10:22:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:24.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-55114d7a15750b86a6761e32df4e9a9438665692969e2fdbe50e1bb0d8e493e5-merged.mount: Deactivated successfully.
Dec 05 10:22:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 121 KiB/s wr, 22 op/s
Dec 05 10:22:24 compute-0 ceph-mon[74418]: pgmap v997: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Dec 05 10:22:24 compute-0 podman[273880]: 2025-12-05 10:22:24.823428399 +0000 UTC m=+3.671898714 container remove 79aa60c52276b58f1538e1575285fbd878ac119a72c17e97227d8b0d4c8de7b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 10:22:24 compute-0 podman[273907]: 2025-12-05 10:22:24.825587667 +0000 UTC m=+2.360458681 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:22:24 compute-0 systemd[1]: libpod-conmon-79aa60c52276b58f1538e1575285fbd878ac119a72c17e97227d8b0d4c8de7b4.scope: Deactivated successfully.
Dec 05 10:22:24 compute-0 sudo[273770]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:24 compute-0 podman[273898]: 2025-12-05 10:22:24.937581921 +0000 UTC m=+2.503651144 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:22:24 compute-0 podman[273955]: 2025-12-05 10:22:24.975348438 +0000 UTC m=+0.100308157 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 10:22:24 compute-0 sudo[273970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:22:24 compute-0 sudo[273970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:22:24 compute-0 sudo[273970]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:25 compute-0 sudo[274013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:22:25 compute-0 sudo[274013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:22:25 compute-0 podman[274081]: 2025-12-05 10:22:25.516218328 +0000 UTC m=+0.053817743 container create 3aa680b7a1c2fa76b7690ed96e77546098487f6913a8c73b460c5a5ba10f5a15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kilby, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 10:22:25 compute-0 systemd[1]: Started libpod-conmon-3aa680b7a1c2fa76b7690ed96e77546098487f6913a8c73b460c5a5ba10f5a15.scope.
Dec 05 10:22:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:22:25 compute-0 podman[274081]: 2025-12-05 10:22:25.486400517 +0000 UTC m=+0.023999912 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:22:25 compute-0 podman[274081]: 2025-12-05 10:22:25.601533596 +0000 UTC m=+0.139133001 container init 3aa680b7a1c2fa76b7690ed96e77546098487f6913a8c73b460c5a5ba10f5a15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:22:25 compute-0 podman[274081]: 2025-12-05 10:22:25.610202832 +0000 UTC m=+0.147802207 container start 3aa680b7a1c2fa76b7690ed96e77546098487f6913a8c73b460c5a5ba10f5a15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 10:22:25 compute-0 kind_kilby[274098]: 167 167
Dec 05 10:22:25 compute-0 systemd[1]: libpod-3aa680b7a1c2fa76b7690ed96e77546098487f6913a8c73b460c5a5ba10f5a15.scope: Deactivated successfully.
Dec 05 10:22:25 compute-0 podman[274081]: 2025-12-05 10:22:25.626395982 +0000 UTC m=+0.163995357 container attach 3aa680b7a1c2fa76b7690ed96e77546098487f6913a8c73b460c5a5ba10f5a15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 05 10:22:25 compute-0 podman[274081]: 2025-12-05 10:22:25.627349888 +0000 UTC m=+0.164949263 container died 3aa680b7a1c2fa76b7690ed96e77546098487f6913a8c73b460c5a5ba10f5a15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kilby, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:22:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:22:25] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec 05 10:22:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:22:25] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec 05 10:22:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a33adf8eea0de45b38f74fb9f2a4b396e9fbe6af7cd58d587b6a372c644c60a-merged.mount: Deactivated successfully.
Dec 05 10:22:25 compute-0 podman[274081]: 2025-12-05 10:22:25.671414156 +0000 UTC m=+0.209013531 container remove 3aa680b7a1c2fa76b7690ed96e77546098487f6913a8c73b460c5a5ba10f5a15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kilby, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:22:25 compute-0 systemd[1]: libpod-conmon-3aa680b7a1c2fa76b7690ed96e77546098487f6913a8c73b460c5a5ba10f5a15.scope: Deactivated successfully.
Dec 05 10:22:25 compute-0 podman[274122]: 2025-12-05 10:22:25.847840081 +0000 UTC m=+0.050673208 container create 6a1b50bcf4a48f4bec81cd0f8721ef925df2585990ef8a67a6374397babec8e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_keller, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:22:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:25.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:25 compute-0 systemd[1]: Started libpod-conmon-6a1b50bcf4a48f4bec81cd0f8721ef925df2585990ef8a67a6374397babec8e6.scope.
Dec 05 10:22:25 compute-0 podman[274122]: 2025-12-05 10:22:25.824702112 +0000 UTC m=+0.027535269 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:22:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233acf050c93761be04c1d3819a3d079e28fcddd226a0538a7aa26c21573cac4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233acf050c93761be04c1d3819a3d079e28fcddd226a0538a7aa26c21573cac4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233acf050c93761be04c1d3819a3d079e28fcddd226a0538a7aa26c21573cac4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233acf050c93761be04c1d3819a3d079e28fcddd226a0538a7aa26c21573cac4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:22:25 compute-0 podman[274122]: 2025-12-05 10:22:25.948697452 +0000 UTC m=+0.151530599 container init 6a1b50bcf4a48f4bec81cd0f8721ef925df2585990ef8a67a6374397babec8e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_keller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 10:22:25 compute-0 podman[274122]: 2025-12-05 10:22:25.958414966 +0000 UTC m=+0.161248093 container start 6a1b50bcf4a48f4bec81cd0f8721ef925df2585990ef8a67a6374397babec8e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_keller, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:22:25 compute-0 podman[274122]: 2025-12-05 10:22:25.961613783 +0000 UTC m=+0.164446930 container attach 6a1b50bcf4a48f4bec81cd0f8721ef925df2585990ef8a67a6374397babec8e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_keller, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:22:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:22:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:26.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:22:26 compute-0 nova_compute[257087]: 2025-12-05 10:22:26.439 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:26 compute-0 nova_compute[257087]: 2025-12-05 10:22:26.443 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:22:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 121 KiB/s wr, 22 op/s
Dec 05 10:22:26 compute-0 lvm[274214]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:22:26 compute-0 lvm[274214]: VG ceph_vg0 finished
Dec 05 10:22:26 compute-0 reverent_keller[274138]: {}
Dec 05 10:22:26 compute-0 systemd[1]: libpod-6a1b50bcf4a48f4bec81cd0f8721ef925df2585990ef8a67a6374397babec8e6.scope: Deactivated successfully.
Dec 05 10:22:26 compute-0 systemd[1]: libpod-6a1b50bcf4a48f4bec81cd0f8721ef925df2585990ef8a67a6374397babec8e6.scope: Consumed 1.179s CPU time.
Dec 05 10:22:26 compute-0 podman[274218]: 2025-12-05 10:22:26.768367239 +0000 UTC m=+0.030437698 container died 6a1b50bcf4a48f4bec81cd0f8721ef925df2585990ef8a67a6374397babec8e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:22:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-233acf050c93761be04c1d3819a3d079e28fcddd226a0538a7aa26c21573cac4-merged.mount: Deactivated successfully.
Dec 05 10:22:26 compute-0 podman[274218]: 2025-12-05 10:22:26.811679386 +0000 UTC m=+0.073749835 container remove 6a1b50bcf4a48f4bec81cd0f8721ef925df2585990ef8a67a6374397babec8e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:22:26 compute-0 systemd[1]: libpod-conmon-6a1b50bcf4a48f4bec81cd0f8721ef925df2585990ef8a67a6374397babec8e6.scope: Deactivated successfully.
Dec 05 10:22:26 compute-0 sudo[274013]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:22:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:27.437Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:22:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec 05 10:22:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:22:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:22:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:22:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:22:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:22:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:22:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:22:27
Dec 05 10:22:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:22:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:22:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', '.rgw.root', '.nfs', 'images', 'volumes', 'backups', 'default.rgw.control', 'cephfs.cephfs.data']
Dec 05 10:22:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:22:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:27.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:22:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:22:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:22:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007590822792537605 of space, bias 1.0, pg target 0.22772468377612817 quantized to 32 (current 32)
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:22:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:28.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 122 KiB/s wr, 23 op/s
Dec 05 10:22:28 compute-0 ceph-mon[74418]: pgmap v998: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 121 KiB/s wr, 22 op/s
Dec 05 10:22:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:22:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:29.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:30.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:30 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:22:30 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:22:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=infra.usagestats t=2025-12-05T10:22:30.262797079Z level=info msg="Usage stats are ready to report"
Dec 05 10:22:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 13 KiB/s wr, 2 op/s
Dec 05 10:22:31 compute-0 nova_compute[257087]: 2025-12-05 10:22:31.441 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:22:31 compute-0 nova_compute[257087]: 2025-12-05 10:22:31.443 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:22:31 compute-0 nova_compute[257087]: 2025-12-05 10:22:31.443 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:22:31 compute-0 nova_compute[257087]: 2025-12-05 10:22:31.443 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:22:31 compute-0 nova_compute[257087]: 2025-12-05 10:22:31.453 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:31 compute-0 nova_compute[257087]: 2025-12-05 10:22:31.454 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:22:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:31.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:22:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:32.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:22:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 13 KiB/s wr, 2 op/s
Dec 05 10:22:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:22:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:22:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:22:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:22:33 compute-0 ceph-mon[74418]: pgmap v999: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 121 KiB/s wr, 22 op/s
Dec 05 10:22:33 compute-0 ceph-mon[74418]: pgmap v1000: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 122 KiB/s wr, 23 op/s
Dec 05 10:22:33 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:33 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:33 compute-0 sudo[274239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:22:33 compute-0 sudo[274239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:22:33 compute-0 sudo[274239]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:33.732Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:22:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:33.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:34.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:34 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:34 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:22:34 compute-0 ceph-mon[74418]: pgmap v1001: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 13 KiB/s wr, 2 op/s
Dec 05 10:22:34 compute-0 ceph-mon[74418]: pgmap v1002: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 13 KiB/s wr, 2 op/s
Dec 05 10:22:34 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:22:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 16 KiB/s wr, 2 op/s
Dec 05 10:22:34 compute-0 sudo[274266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:22:34 compute-0 sudo[274266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:22:34 compute-0 sudo[274266]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:22:35] "GET /metrics HTTP/1.1" 200 48568 "" "Prometheus/2.51.0"
Dec 05 10:22:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:22:35] "GET /metrics HTTP/1.1" 200 48568 "" "Prometheus/2.51.0"
Dec 05 10:22:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:35.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:36.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:36 compute-0 nova_compute[257087]: 2025-12-05 10:22:36.455 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:22:36 compute-0 nova_compute[257087]: 2025-12-05 10:22:36.458 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:36 compute-0 nova_compute[257087]: 2025-12-05 10:22:36.458 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:22:36 compute-0 nova_compute[257087]: 2025-12-05 10:22:36.458 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:22:36 compute-0 nova_compute[257087]: 2025-12-05 10:22:36.459 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:22:36 compute-0 nova_compute[257087]: 2025-12-05 10:22:36.461 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Dec 05 10:22:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:37.439Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:22:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:37.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:22:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:22:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:22:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:22:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:38.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 8.2 KiB/s wr, 29 op/s
Dec 05 10:22:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:39 compute-0 ceph-mon[74418]: pgmap v1003: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 16 KiB/s wr, 2 op/s
Dec 05 10:22:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:39.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:40.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.8 KiB/s wr, 29 op/s
Dec 05 10:22:40 compute-0 ceph-mon[74418]: pgmap v1004: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Dec 05 10:22:40 compute-0 ceph-mon[74418]: pgmap v1005: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 8.2 KiB/s wr, 29 op/s
Dec 05 10:22:41 compute-0 nova_compute[257087]: 2025-12-05 10:22:41.459 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:41 compute-0 nova_compute[257087]: 2025-12-05 10:22:41.461 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:41.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:42 compute-0 ceph-mon[74418]: pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.8 KiB/s wr, 29 op/s
Dec 05 10:22:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:42.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:22:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:22:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.8 KiB/s wr, 29 op/s
Dec 05 10:22:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:22:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:22:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:22:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:22:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:22:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:43.734Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:22:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:43.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:44.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.8 KiB/s wr, 29 op/s
Dec 05 10:22:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:45.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:22:45] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:22:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:22:45] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:22:46 compute-0 ceph-mon[74418]: pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.8 KiB/s wr, 29 op/s
Dec 05 10:22:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2501620433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:22:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:46.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:46 compute-0 nova_compute[257087]: 2025-12-05 10:22:46.463 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:22:46 compute-0 nova_compute[257087]: 2025-12-05 10:22:46.465 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:22:46 compute-0 nova_compute[257087]: 2025-12-05 10:22:46.465 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:22:46 compute-0 nova_compute[257087]: 2025-12-05 10:22:46.465 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:22:46 compute-0 nova_compute[257087]: 2025-12-05 10:22:46.502 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:46 compute-0 nova_compute[257087]: 2025-12-05 10:22:46.503 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:22:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 28 op/s
Dec 05 10:22:47 compute-0 ceph-mon[74418]: pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.8 KiB/s wr, 29 op/s
Dec 05 10:22:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:47.440Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:22:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:47.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:22:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:22:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:22:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:22:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:48.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Dec 05 10:22:48 compute-0 ceph-mon[74418]: pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 28 op/s
Dec 05 10:22:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:49.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:50.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:50 compute-0 ceph-mon[74418]: pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Dec 05 10:22:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Dec 05 10:22:51 compute-0 nova_compute[257087]: 2025-12-05 10:22:51.503 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:51 compute-0 ceph-mon[74418]: pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Dec 05 10:22:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:51.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:52.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Dec 05 10:22:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:22:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:22:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:22:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:22:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:53.735Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:22:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:53.735Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:22:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:53.736Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:22:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:53.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:54.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:54 compute-0 ceph-mon[74418]: pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Dec 05 10:22:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 1 op/s
Dec 05 10:22:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:54 compute-0 sudo[274311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:22:54 compute-0 sudo[274311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:22:54 compute-0 sudo[274311]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:55 compute-0 podman[274335]: 2025-12-05 10:22:55.058312764 +0000 UTC m=+0.095097835 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 05 10:22:55 compute-0 podman[274356]: 2025-12-05 10:22:55.180228138 +0000 UTC m=+0.086349838 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 10:22:55 compute-0 podman[274357]: 2025-12-05 10:22:55.191118254 +0000 UTC m=+0.095106595 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:22:55 compute-0 nova_compute[257087]: 2025-12-05 10:22:55.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:22:55] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:22:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:22:55] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:22:55 compute-0 ceph-mon[74418]: pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 1 op/s
Dec 05 10:22:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:55.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:56.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:56 compute-0 nova_compute[257087]: 2025-12-05 10:22:56.506 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:22:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:22:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:22:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1428479784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:22:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:22:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1428479784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
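The two mon_command dispatches above are what a client authenticating as client.openstack sends for "ceph df" and "ceph osd pool get-quota volumes" with JSON output. A minimal sketch, assuming the ceph CLI plus /etc/ceph/ceph.conf and the client.openstack keyring are readable on this host, of issuing the same queries:

    # Sketch: replay the monitor queries from the audit lines above via the ceph CLI.
    # Assumes ceph.conf and the client.openstack keyring are readable here.
    import json
    import subprocess

    def ceph_json(*args):
        cmd = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
               "--format", "json", *args]
        return json.loads(subprocess.check_output(cmd, text=True))

    df = ceph_json("df")                                      # {"prefix":"df"}
    quota = ceph_json("osd", "pool", "get-quota", "volumes")  # {"prefix":"osd pool get-quota"}
    print(df.get("stats", {}).get("total_avail_bytes"), quota)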
Dec 05 10:22:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:22:57.443Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
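The dispatcher error above means Alertmanager on this node could not POST the alert to the ceph-dashboard webhook receivers on compute-1 and compute-2 before its retry deadline. A minimal sketch, standard library only, for probing one of those receivers with an explicit timeout (the URL is copied from the log line; the empty payload is illustrative):

    # Sketch: probe the dashboard webhook receiver Alertmanager failed to reach.
    # URL copied from the log; timeout and payload are illustrative only.
    import json
    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        URL,
        data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable, HTTP", resp.status)
    except OSError as exc:   # covers URLError and socket timeouts
        print("unreachable:", exc)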
Dec 05 10:22:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:22:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:22:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:22:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:22:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:22:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:22:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:22:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:22:57 compute-0 nova_compute[257087]: 2025-12-05 10:22:57.725 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:57 compute-0 nova_compute[257087]: 2025-12-05 10:22:57.726 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 10:22:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:22:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:57.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:22:57 compute-0 ceph-mon[74418]: pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:22:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1428479784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:22:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1428479784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:22:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:22:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:22:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:22:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:22:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:22:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:22:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:22:58.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:22:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:22:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:22:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:22:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:22:59.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:00.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:00 compute-0 ceph-mon[74418]: pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:23:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:23:01 compute-0 nova_compute[257087]: 2025-12-05 10:23:01.508 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:23:01 compute-0 nova_compute[257087]: 2025-12-05 10:23:01.545 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:01.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:02.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:02 compute-0 ceph-mon[74418]: pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:23:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:23:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:23:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:23:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:23:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:23:03 compute-0 nova_compute[257087]: 2025-12-05 10:23:03.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:03 compute-0 nova_compute[257087]: 2025-12-05 10:23:03.530 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 10:23:03 compute-0 nova_compute[257087]: 2025-12-05 10:23:03.595 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 10:23:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:03.737Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:03.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:04.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1018: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:23:04 compute-0 ceph-mon[74418]: pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:23:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:05 compute-0 nova_compute[257087]: 2025-12-05 10:23:05.590 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:05 compute-0 nova_compute[257087]: 2025-12-05 10:23:05.591 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:23:05] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Dec 05 10:23:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:23:05] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Dec 05 10:23:05 compute-0 ceph-mon[74418]: pgmap v1018: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:23:05 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:23:05.926 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:23:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:05.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:05 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:23:05.927 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:23:05 compute-0 nova_compute[257087]: 2025-12-05 10:23:05.927 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:06.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:06 compute-0 nova_compute[257087]: 2025-12-05 10:23:06.511 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:06 compute-0 nova_compute[257087]: 2025-12-05 10:23:06.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:06 compute-0 nova_compute[257087]: 2025-12-05 10:23:06.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:06 compute-0 nova_compute[257087]: 2025-12-05 10:23:06.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:23:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:07.445Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:07 compute-0 nova_compute[257087]: 2025-12-05 10:23:07.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:07 compute-0 nova_compute[257087]: 2025-12-05 10:23:07.626 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:23:07 compute-0 nova_compute[257087]: 2025-12-05 10:23:07.627 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:23:07 compute-0 nova_compute[257087]: 2025-12-05 10:23:07.628 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
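The Acquiring/acquired/released triplet above is oslo.concurrency's lockutils instrumentation around the "compute_resources" semaphore; the same lock is re-taken just below for _update_available_resource and held for 0.834s. A minimal sketch of the primitive, assuming oslo.concurrency is installed:

    # Sketch: the lock primitive behind the acquired/released debug lines.
    # Requires python3-oslo-concurrency; process-local semaphore by default.
    from oslo_concurrency import lockutils

    with lockutils.lock("compute_resources"):
        # Critical section; entering and leaving produce the
        # "acquired"/"released" lines with waited/held timings.
        pass

    @lockutils.synchronized("compute_resources")   # decorator form of the same lock
    def update_available_resource():
        pass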
Dec 05 10:23:07 compute-0 nova_compute[257087]: 2025-12-05 10:23:07.628 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:23:07 compute-0 nova_compute[257087]: 2025-12-05 10:23:07.629 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
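The "Running cmd (subprocess)" line is oslo.concurrency's processutils wrapper around the ceph df call nova makes during the resource audit; the matching "CMD ... returned: 0 in 0.530s" line appears a few entries below. A minimal sketch of the same call, assuming oslo.concurrency and the same ceph.conf/keyring:

    # Sketch: the ceph df invocation as issued through oslo.concurrency,
    # matching the command string in the log line above.
    from oslo_concurrency import processutils

    try:
        stdout, stderr = processutils.execute(
            "ceph", "df", "--format=json",
            "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
        )
        print("returned 0,", len(stdout), "bytes of JSON")
    except processutils.ProcessExecutionError as exc:
        # Raised on a non-zero exit, i.e. a failed "CMD ... returned" line.
        print("ceph df failed:", exc.exit_code, exc.stderr)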
Dec 05 10:23:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:07.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:23:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:23:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:23:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:23:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:23:08 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89422323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.159 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:23:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:08.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:08 compute-0 ceph-mon[74418]: pgmap v1019: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:23:08 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/4066796718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.356 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.358 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4595MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.358 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.358 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.474 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.475 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.495 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing inventories for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.581 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Updating ProviderTree inventory for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.582 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Updating inventory in ProviderTree for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
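The inventory reported to placement above determines schedulable capacity: for each resource class it is (total - reserved) * allocation_ratio. A short worked check against the values in the log line:

    # Sketch: effective capacity implied by the inventory in the log line above,
    # using placement's capacity formula (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2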
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.597 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing aggregate associations for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.622 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing trait associations for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6, traits: HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_MMX,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 10:23:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:23:08 compute-0 nova_compute[257087]: 2025-12-05 10:23:08.651 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:23:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:23:09 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837241036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:23:09 compute-0 nova_compute[257087]: 2025-12-05 10:23:09.135 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:23:09 compute-0 nova_compute[257087]: 2025-12-05 10:23:09.143 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:23:09 compute-0 nova_compute[257087]: 2025-12-05 10:23:09.190 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:23:09 compute-0 nova_compute[257087]: 2025-12-05 10:23:09.192 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:23:09 compute-0 nova_compute[257087]: 2025-12-05 10:23:09.193 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:23:09 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/89422323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:23:09 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/241812794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:23:09 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1926538412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:23:09 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1837241036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:23:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:09.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:10.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:10 compute-0 ceph-mon[74418]: pgmap v1020: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:23:10 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1642090975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:23:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:23:10 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:23:10.929 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
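The transaction above is the metadata agent acknowledging nb_cfg=13 by writing neutron:ovn-metadata-sb-cfg into the external_ids of its Chassis_Private row, five seconds after the "Delaying updating chassis table" line at 10:23:05. A minimal sketch for reading that row back out of band; it assumes ovn-sbctl is installed and can reach the southbound DB (on an EDPM compute this usually requires the --db and TLS options matching ovn_controller's configuration, omitted here), and the record UUID is copied from the log line:

    # Sketch: read back the Chassis_Private external_ids the agent just set.
    # Assumes ovn-sbctl can reach the SB DB; --db/TLS flags omitted.
    import subprocess

    RECORD = "41643524-e4b6-4069-ba08-6e5872c74bd3"   # UUID from the log line
    out = subprocess.check_output(
        ["ovn-sbctl", "get", "Chassis_Private", RECORD, "external_ids"],
        text=True,
    )
    print(out.strip())   # expect ... "neutron:ovn-metadata-sb-cfg"="13" ...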
Dec 05 10:23:11 compute-0 nova_compute[257087]: 2025-12-05 10:23:11.193 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:11 compute-0 nova_compute[257087]: 2025-12-05 10:23:11.194 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:23:11 compute-0 nova_compute[257087]: 2025-12-05 10:23:11.194 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:23:11 compute-0 nova_compute[257087]: 2025-12-05 10:23:11.219 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:23:11 compute-0 nova_compute[257087]: 2025-12-05 10:23:11.219 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:11 compute-0 nova_compute[257087]: 2025-12-05 10:23:11.219 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:23:11 compute-0 nova_compute[257087]: 2025-12-05 10:23:11.512 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:23:11 compute-0 ceph-mon[74418]: pgmap v1021: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:23:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:12.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:23:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:23:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:23:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:23:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:23:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:23:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:23:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:23:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:13.739Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:13.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:14.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:14 compute-0 ceph-mon[74418]: pgmap v1022: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:23:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2235456461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:23:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:23:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:15 compute-0 sudo[274465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:23:15 compute-0 sudo[274465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:23:15 compute-0 sudo[274465]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:23:15] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec 05 10:23:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:23:15] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec 05 10:23:15 compute-0 ceph-mon[74418]: pgmap v1023: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:23:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:16.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:16 compute-0 nova_compute[257087]: 2025-12-05 10:23:16.515 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:23:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:17.446Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:17.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:23:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:23:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:23:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:23:18 compute-0 ceph-mon[74418]: pgmap v1024: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:23:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:18.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:23:18 compute-0 ceph-mgr[74711]: [dashboard INFO request] [192.168.122.100:51286] [POST] [200] [0.008s] [4.0B] [4dcc1368-e482-4cf6-8037-8cd1bc41b82e] /api/prometheus_receiver
Dec 05 10:23:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:19.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:20 compute-0 ceph-mon[74418]: pgmap v1025: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:23:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:20.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:23:20.580 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:23:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:23:20.581 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:23:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:23:20.581 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:23:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1026: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:23:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3557321040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:23:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3479525103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:23:21 compute-0 nova_compute[257087]: 2025-12-05 10:23:21.517 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:22.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:22 compute-0 ceph-mon[74418]: pgmap v1026: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:23:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:23:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:23:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:23:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:23:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:23:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:23.740Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:23:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:23.741Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:23 compute-0 ceph-mon[74418]: pgmap v1027: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:23:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:23.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:23:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:25 compute-0 podman[274502]: 2025-12-05 10:23:25.401798458 +0000 UTC m=+0.064797423 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 10:23:25 compute-0 podman[274500]: 2025-12-05 10:23:25.401839749 +0000 UTC m=+0.066296745 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:23:25 compute-0 podman[274501]: 2025-12-05 10:23:25.42835177 +0000 UTC m=+0.091085448 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:23:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:23:25] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec 05 10:23:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:23:25] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec 05 10:23:25 compute-0 ceph-mon[74418]: pgmap v1028: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:23:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:25.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:23:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.7 total, 600.0 interval
                                           Cumulative writes: 6712 writes, 30K keys, 6710 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 6712 writes, 6710 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1483 writes, 6469 keys, 1481 commit groups, 1.0 writes per commit group, ingest: 11.59 MB, 0.02 MB/s
                                           Interval WAL: 1483 writes, 1481 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     32.9      1.36              0.25        17    0.080       0      0       0.0       0.0
                                             L6      1/0   12.31 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.5     71.1     61.3      3.31              0.80        16    0.207     88K   8743       0.0       0.0
                                            Sum      1/0   12.31 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.5     50.4     53.1      4.67              1.05        33    0.141     88K   8743       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.9     60.6     59.7      1.02              0.23         8    0.128     25K   2546       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     71.1     61.3      3.31              0.80        16    0.207     88K   8743       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     49.8      0.90              0.25        16    0.056       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.46              0.00         1    0.463       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.7 total, 600.0 interval
                                           Flush(GB): cumulative 0.044, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.24 GB write, 0.10 MB/s write, 0.23 GB read, 0.10 MB/s read, 4.7 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5585d4f19350#2 capacity: 304.00 MB usage: 20.10 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000278 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1061,19.41 MB,6.38502%) FilterBlock(34,259.11 KB,0.0832357%) IndexBlock(34,445.12 KB,0.142991%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 05 10:23:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:23:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:26.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:23:26 compute-0 nova_compute[257087]: 2025-12-05 10:23:26.518 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:23:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:27.446Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:23:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:27.447Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:23:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:23:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:23:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:23:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:23:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:23:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:23:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:23:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:23:27
Dec 05 10:23:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:23:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:23:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.nfs', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.control', '.rgw.root', 'vms', 'images']
Dec 05 10:23:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:23:27 compute-0 ceph-mon[74418]: pgmap v1029: 353 pgs: 353 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:23:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:27.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:23:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:23:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:23:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:23:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:28.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1030: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 05 10:23:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:28.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:29 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:23:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:29.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:30 compute-0 ceph-mon[74418]: pgmap v1030: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 05 10:23:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:30.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:23:31 compute-0 nova_compute[257087]: 2025-12-05 10:23:31.520 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:31 compute-0 nova_compute[257087]: 2025-12-05 10:23:31.522 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:31.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:32 compute-0 ceph-mon[74418]: pgmap v1031: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:23:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:32.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:23:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:23:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:23:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:23:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:23:33 compute-0 ceph-mon[74418]: pgmap v1032: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:23:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:33.742Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:33.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:34 compute-0 sudo[274570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:23:34 compute-0 sudo[274570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:23:34 compute-0 sudo[274570]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:34 compute-0 sudo[274595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:23:34 compute-0 sudo[274595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:23:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:34.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Dec 05 10:23:34 compute-0 sudo[274595]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:23:34 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:23:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:23:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:23:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 05 10:23:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:23:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:34 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:23:34 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:23:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:23:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:23:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:23:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:23:34 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:23:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:23:34 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:23:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:23:34 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:23:35 compute-0 sudo[274655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:23:35 compute-0 sudo[274655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:23:35 compute-0 sudo[274655]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:35 compute-0 sudo[274680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:23:35 compute-0 sudo[274680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:23:35 compute-0 sudo[274698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:23:35 compute-0 sudo[274698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:23:35 compute-0 sudo[274698]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:35 compute-0 podman[274768]: 2025-12-05 10:23:35.558410759 +0000 UTC m=+0.044981705 container create 9a8f0a9d11bde94331a7f228cf59d1c0fbf4c6f3a33479153dba3ada8aa82d46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mayer, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:23:35 compute-0 systemd[1]: Started libpod-conmon-9a8f0a9d11bde94331a7f228cf59d1c0fbf4c6f3a33479153dba3ada8aa82d46.scope.
Dec 05 10:23:35 compute-0 podman[274768]: 2025-12-05 10:23:35.536617687 +0000 UTC m=+0.023188653 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:23:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:23:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:23:35] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec 05 10:23:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:23:35] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec 05 10:23:35 compute-0 podman[274768]: 2025-12-05 10:23:35.666213882 +0000 UTC m=+0.152784858 container init 9a8f0a9d11bde94331a7f228cf59d1c0fbf4c6f3a33479153dba3ada8aa82d46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Dec 05 10:23:35 compute-0 podman[274768]: 2025-12-05 10:23:35.677992953 +0000 UTC m=+0.164563899 container start 9a8f0a9d11bde94331a7f228cf59d1c0fbf4c6f3a33479153dba3ada8aa82d46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mayer, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:23:35 compute-0 podman[274768]: 2025-12-05 10:23:35.682216818 +0000 UTC m=+0.168787774 container attach 9a8f0a9d11bde94331a7f228cf59d1c0fbf4c6f3a33479153dba3ada8aa82d46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mayer, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 10:23:35 compute-0 kind_mayer[274784]: 167 167
Dec 05 10:23:35 compute-0 systemd[1]: libpod-9a8f0a9d11bde94331a7f228cf59d1c0fbf4c6f3a33479153dba3ada8aa82d46.scope: Deactivated successfully.
Dec 05 10:23:35 compute-0 podman[274768]: 2025-12-05 10:23:35.686764971 +0000 UTC m=+0.173335927 container died 9a8f0a9d11bde94331a7f228cf59d1c0fbf4c6f3a33479153dba3ada8aa82d46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mayer, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 10:23:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-054a39b7a39bc98ea1fae16cbeb87b8bee845c35d22b5d879a19dcc94493d0ba-merged.mount: Deactivated successfully.
Dec 05 10:23:35 compute-0 podman[274768]: 2025-12-05 10:23:35.739016783 +0000 UTC m=+0.225587729 container remove 9a8f0a9d11bde94331a7f228cf59d1c0fbf4c6f3a33479153dba3ada8aa82d46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 10:23:35 compute-0 systemd[1]: libpod-conmon-9a8f0a9d11bde94331a7f228cf59d1c0fbf4c6f3a33479153dba3ada8aa82d46.scope: Deactivated successfully.
Dec 05 10:23:35 compute-0 podman[274808]: 2025-12-05 10:23:35.910556169 +0000 UTC m=+0.052278193 container create c5aa4fc9b54c5c796402df43d5202b00da5cad22e324a0f65602d94e5dc70d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:23:35 compute-0 ceph-mon[74418]: pgmap v1033: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Dec 05 10:23:35 compute-0 ceph-mon[74418]: pgmap v1034: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 05 10:23:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:23:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:23:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:23:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:23:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:23:35 compute-0 systemd[1]: Started libpod-conmon-c5aa4fc9b54c5c796402df43d5202b00da5cad22e324a0f65602d94e5dc70d9a.scope.
Dec 05 10:23:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:35.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:23:35 compute-0 podman[274808]: 2025-12-05 10:23:35.887265486 +0000 UTC m=+0.028987550 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904099165ebce0d79902f3d95d66b0ee98c76ed365a966dca6cc79fa2e94e5df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904099165ebce0d79902f3d95d66b0ee98c76ed365a966dca6cc79fa2e94e5df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904099165ebce0d79902f3d95d66b0ee98c76ed365a966dca6cc79fa2e94e5df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904099165ebce0d79902f3d95d66b0ee98c76ed365a966dca6cc79fa2e94e5df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904099165ebce0d79902f3d95d66b0ee98c76ed365a966dca6cc79fa2e94e5df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:36 compute-0 podman[274808]: 2025-12-05 10:23:36.001157234 +0000 UTC m=+0.142879288 container init c5aa4fc9b54c5c796402df43d5202b00da5cad22e324a0f65602d94e5dc70d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 05 10:23:36 compute-0 podman[274808]: 2025-12-05 10:23:36.008869104 +0000 UTC m=+0.150591138 container start c5aa4fc9b54c5c796402df43d5202b00da5cad22e324a0f65602d94e5dc70d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 10:23:36 compute-0 podman[274808]: 2025-12-05 10:23:36.011764612 +0000 UTC m=+0.153486646 container attach c5aa4fc9b54c5c796402df43d5202b00da5cad22e324a0f65602d94e5dc70d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 05 10:23:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:36.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:36 compute-0 adoring_volhard[274825]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:23:36 compute-0 adoring_volhard[274825]: --> All data devices are unavailable
Dec 05 10:23:36 compute-0 systemd[1]: libpod-c5aa4fc9b54c5c796402df43d5202b00da5cad22e324a0f65602d94e5dc70d9a.scope: Deactivated successfully.
Dec 05 10:23:36 compute-0 podman[274808]: 2025-12-05 10:23:36.391286138 +0000 UTC m=+0.533008192 container died c5aa4fc9b54c5c796402df43d5202b00da5cad22e324a0f65602d94e5dc70d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:23:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-904099165ebce0d79902f3d95d66b0ee98c76ed365a966dca6cc79fa2e94e5df-merged.mount: Deactivated successfully.
Dec 05 10:23:36 compute-0 podman[274808]: 2025-12-05 10:23:36.437026402 +0000 UTC m=+0.578748436 container remove c5aa4fc9b54c5c796402df43d5202b00da5cad22e324a0f65602d94e5dc70d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_volhard, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:23:36 compute-0 systemd[1]: libpod-conmon-c5aa4fc9b54c5c796402df43d5202b00da5cad22e324a0f65602d94e5dc70d9a.scope: Deactivated successfully.
Dec 05 10:23:36 compute-0 sudo[274680]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:36 compute-0 nova_compute[257087]: 2025-12-05 10:23:36.521 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:36 compute-0 sudo[274854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:23:36 compute-0 sudo[274854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:23:36 compute-0 sudo[274854]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:36 compute-0 sudo[274879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:23:36 compute-0 sudo[274879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:23:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 05 10:23:37 compute-0 podman[274945]: 2025-12-05 10:23:37.081982357 +0000 UTC m=+0.048441719 container create 7316e76ae28a304f21795104d64445c49b45d02bf93fd053c5b49625fcc8434d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sutherland, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 05 10:23:37 compute-0 systemd[1]: Started libpod-conmon-7316e76ae28a304f21795104d64445c49b45d02bf93fd053c5b49625fcc8434d.scope.
Dec 05 10:23:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:23:37 compute-0 podman[274945]: 2025-12-05 10:23:37.060962235 +0000 UTC m=+0.027421637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:23:37 compute-0 podman[274945]: 2025-12-05 10:23:37.16589703 +0000 UTC m=+0.132356422 container init 7316e76ae28a304f21795104d64445c49b45d02bf93fd053c5b49625fcc8434d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:23:37 compute-0 podman[274945]: 2025-12-05 10:23:37.1732543 +0000 UTC m=+0.139713672 container start 7316e76ae28a304f21795104d64445c49b45d02bf93fd053c5b49625fcc8434d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:23:37 compute-0 podman[274945]: 2025-12-05 10:23:37.176590131 +0000 UTC m=+0.143049503 container attach 7316e76ae28a304f21795104d64445c49b45d02bf93fd053c5b49625fcc8434d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:23:37 compute-0 quirky_sutherland[274962]: 167 167
Dec 05 10:23:37 compute-0 systemd[1]: libpod-7316e76ae28a304f21795104d64445c49b45d02bf93fd053c5b49625fcc8434d.scope: Deactivated successfully.
Dec 05 10:23:37 compute-0 conmon[274962]: conmon 7316e76ae28a304f2179 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7316e76ae28a304f21795104d64445c49b45d02bf93fd053c5b49625fcc8434d.scope/container/memory.events
Dec 05 10:23:37 compute-0 podman[274945]: 2025-12-05 10:23:37.180024324 +0000 UTC m=+0.146483696 container died 7316e76ae28a304f21795104d64445c49b45d02bf93fd053c5b49625fcc8434d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sutherland, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:23:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-61300943bee16caa340d5c428a18b3a23067926dc8d17cc2e4db6f2b7bef6710-merged.mount: Deactivated successfully.
Dec 05 10:23:37 compute-0 podman[274945]: 2025-12-05 10:23:37.218050989 +0000 UTC m=+0.184510361 container remove 7316e76ae28a304f21795104d64445c49b45d02bf93fd053c5b49625fcc8434d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sutherland, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:23:37 compute-0 systemd[1]: libpod-conmon-7316e76ae28a304f21795104d64445c49b45d02bf93fd053c5b49625fcc8434d.scope: Deactivated successfully.
Dec 05 10:23:37 compute-0 podman[274984]: 2025-12-05 10:23:37.399457584 +0000 UTC m=+0.056718165 container create 8631da8472a64689bb10130e8ecdc42155df361dbd6160d7ce54eb4f44050c09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_volhard, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 10:23:37 compute-0 systemd[1]: Started libpod-conmon-8631da8472a64689bb10130e8ecdc42155df361dbd6160d7ce54eb4f44050c09.scope.
Dec 05 10:23:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:37.448Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:37 compute-0 podman[274984]: 2025-12-05 10:23:37.376788637 +0000 UTC m=+0.034049228 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:23:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cabfca358d314e8b62592ba72f6d25ff14a6fba04e05c87102be4d146fad9c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cabfca358d314e8b62592ba72f6d25ff14a6fba04e05c87102be4d146fad9c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cabfca358d314e8b62592ba72f6d25ff14a6fba04e05c87102be4d146fad9c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cabfca358d314e8b62592ba72f6d25ff14a6fba04e05c87102be4d146fad9c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:37 compute-0 podman[274984]: 2025-12-05 10:23:37.489605276 +0000 UTC m=+0.146865897 container init 8631da8472a64689bb10130e8ecdc42155df361dbd6160d7ce54eb4f44050c09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:23:37 compute-0 podman[274984]: 2025-12-05 10:23:37.496664388 +0000 UTC m=+0.153924929 container start 8631da8472a64689bb10130e8ecdc42155df361dbd6160d7ce54eb4f44050c09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_volhard, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:23:37 compute-0 podman[274984]: 2025-12-05 10:23:37.506588948 +0000 UTC m=+0.163849489 container attach 8631da8472a64689bb10130e8ecdc42155df361dbd6160d7ce54eb4f44050c09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 05 10:23:37 compute-0 stoic_volhard[275001]: {
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:     "1": [
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:         {
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             "devices": [
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "/dev/loop3"
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             ],
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             "lv_name": "ceph_lv0",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             "lv_size": "21470642176",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             "name": "ceph_lv0",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             "tags": {
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.cluster_name": "ceph",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.crush_device_class": "",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.encrypted": "0",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.osd_id": "1",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.type": "block",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.vdo": "0",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:                 "ceph.with_tpm": "0"
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             },
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             "type": "block",
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:             "vg_name": "ceph_vg0"
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:         }
Dec 05 10:23:37 compute-0 stoic_volhard[275001]:     ]
Dec 05 10:23:37 compute-0 stoic_volhard[275001]: }
Dec 05 10:23:37 compute-0 systemd[1]: libpod-8631da8472a64689bb10130e8ecdc42155df361dbd6160d7ce54eb4f44050c09.scope: Deactivated successfully.
Dec 05 10:23:37 compute-0 podman[274984]: 2025-12-05 10:23:37.844366447 +0000 UTC m=+0.501627018 container died 8631da8472a64689bb10130e8ecdc42155df361dbd6160d7ce54eb4f44050c09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:23:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:37.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:23:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:23:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:23:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:23:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:38.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cabfca358d314e8b62592ba72f6d25ff14a6fba04e05c87102be4d146fad9c0-merged.mount: Deactivated successfully.
Dec 05 10:23:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 113 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.4 MiB/s wr, 59 op/s
Dec 05 10:23:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:38.847Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:23:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:38.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:39 compute-0 ceph-mon[74418]: pgmap v1035: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 05 10:23:39 compute-0 podman[274984]: 2025-12-05 10:23:39.17773982 +0000 UTC m=+1.835000401 container remove 8631da8472a64689bb10130e8ecdc42155df361dbd6160d7ce54eb4f44050c09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 10:23:39 compute-0 systemd[1]: libpod-conmon-8631da8472a64689bb10130e8ecdc42155df361dbd6160d7ce54eb4f44050c09.scope: Deactivated successfully.
Dec 05 10:23:39 compute-0 sudo[274879]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:39 compute-0 sudo[275024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:23:39 compute-0 sudo[275024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:23:39 compute-0 sudo[275024]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:39 compute-0 sudo[275049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:23:39 compute-0 sudo[275049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:23:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:39 compute-0 podman[275115]: 2025-12-05 10:23:39.831468424 +0000 UTC m=+0.038297633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:23:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:39.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:39 compute-0 podman[275115]: 2025-12-05 10:23:39.970687071 +0000 UTC m=+0.177516220 container create ec6456f35ba7756b7cb8d26535158b8ea97e4a88ad3899a586e3ae7a3b0e7f8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 10:23:40 compute-0 systemd[1]: Started libpod-conmon-ec6456f35ba7756b7cb8d26535158b8ea97e4a88ad3899a586e3ae7a3b0e7f8e.scope.
Dec 05 10:23:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:23:40 compute-0 podman[275115]: 2025-12-05 10:23:40.159825857 +0000 UTC m=+0.366655056 container init ec6456f35ba7756b7cb8d26535158b8ea97e4a88ad3899a586e3ae7a3b0e7f8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 10:23:40 compute-0 ceph-mon[74418]: pgmap v1036: 353 pgs: 353 active+clean; 113 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.4 MiB/s wr, 59 op/s
Dec 05 10:23:40 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1728807065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:23:40 compute-0 podman[275115]: 2025-12-05 10:23:40.168666807 +0000 UTC m=+0.375495926 container start ec6456f35ba7756b7cb8d26535158b8ea97e4a88ad3899a586e3ae7a3b0e7f8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 10:23:40 compute-0 recursing_euclid[275131]: 167 167
Dec 05 10:23:40 compute-0 systemd[1]: libpod-ec6456f35ba7756b7cb8d26535158b8ea97e4a88ad3899a586e3ae7a3b0e7f8e.scope: Deactivated successfully.
Dec 05 10:23:40 compute-0 conmon[275131]: conmon ec6456f35ba7756b7cb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec6456f35ba7756b7cb8d26535158b8ea97e4a88ad3899a586e3ae7a3b0e7f8e.scope/container/memory.events
Dec 05 10:23:40 compute-0 podman[275115]: 2025-12-05 10:23:40.191495958 +0000 UTC m=+0.398325077 container attach ec6456f35ba7756b7cb8d26535158b8ea97e4a88ad3899a586e3ae7a3b0e7f8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_euclid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:23:40 compute-0 podman[275115]: 2025-12-05 10:23:40.194099299 +0000 UTC m=+0.400928408 container died ec6456f35ba7756b7cb8d26535158b8ea97e4a88ad3899a586e3ae7a3b0e7f8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_euclid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9df13e535ee8aa31b3637d237497b7c55c555c7ba74f23fc17cce3b0f6732a3a-merged.mount: Deactivated successfully.
Dec 05 10:23:40 compute-0 podman[275115]: 2025-12-05 10:23:40.241802966 +0000 UTC m=+0.448632085 container remove ec6456f35ba7756b7cb8d26535158b8ea97e4a88ad3899a586e3ae7a3b0e7f8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_euclid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 05 10:23:40 compute-0 systemd[1]: libpod-conmon-ec6456f35ba7756b7cb8d26535158b8ea97e4a88ad3899a586e3ae7a3b0e7f8e.scope: Deactivated successfully.
Dec 05 10:23:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:23:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:40.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:23:40 compute-0 podman[275158]: 2025-12-05 10:23:40.413162128 +0000 UTC m=+0.048616263 container create bd90a11ae8ecec197324c4b8bbe87e84e64deffc7f63bdbe78507c51e74f8581 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:23:40 compute-0 systemd[1]: Started libpod-conmon-bd90a11ae8ecec197324c4b8bbe87e84e64deffc7f63bdbe78507c51e74f8581.scope.
Dec 05 10:23:40 compute-0 podman[275158]: 2025-12-05 10:23:40.39410078 +0000 UTC m=+0.029554925 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:23:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c507c19adef77ef83c95616e07a0c0011c19d690cd0560399039ec5b04f194c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c507c19adef77ef83c95616e07a0c0011c19d690cd0560399039ec5b04f194c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c507c19adef77ef83c95616e07a0c0011c19d690cd0560399039ec5b04f194c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c507c19adef77ef83c95616e07a0c0011c19d690cd0560399039ec5b04f194c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:23:40 compute-0 podman[275158]: 2025-12-05 10:23:40.514104314 +0000 UTC m=+0.149558479 container init bd90a11ae8ecec197324c4b8bbe87e84e64deffc7f63bdbe78507c51e74f8581 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 10:23:40 compute-0 podman[275158]: 2025-12-05 10:23:40.522661537 +0000 UTC m=+0.158115662 container start bd90a11ae8ecec197324c4b8bbe87e84e64deffc7f63bdbe78507c51e74f8581 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 10:23:40 compute-0 podman[275158]: 2025-12-05 10:23:40.526784209 +0000 UTC m=+0.162238374 container attach bd90a11ae8ecec197324c4b8bbe87e84e64deffc7f63bdbe78507c51e74f8581 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:23:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 113 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.4 MiB/s wr, 59 op/s
Dec 05 10:23:41 compute-0 lvm[275250]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:23:41 compute-0 lvm[275250]: VG ceph_vg0 finished
Dec 05 10:23:41 compute-0 sharp_rubin[275176]: {}
Dec 05 10:23:41 compute-0 systemd[1]: libpod-bd90a11ae8ecec197324c4b8bbe87e84e64deffc7f63bdbe78507c51e74f8581.scope: Deactivated successfully.
Dec 05 10:23:41 compute-0 systemd[1]: libpod-bd90a11ae8ecec197324c4b8bbe87e84e64deffc7f63bdbe78507c51e74f8581.scope: Consumed 1.251s CPU time.
Dec 05 10:23:41 compute-0 podman[275158]: 2025-12-05 10:23:41.326032792 +0000 UTC m=+0.961486927 container died bd90a11ae8ecec197324c4b8bbe87e84e64deffc7f63bdbe78507c51e74f8581 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_rubin, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 10:23:41 compute-0 nova_compute[257087]: 2025-12-05 10:23:41.524 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:23:41 compute-0 ceph-mon[74418]: pgmap v1037: 353 pgs: 353 active+clean; 113 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.4 MiB/s wr, 59 op/s
Dec 05 10:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c507c19adef77ef83c95616e07a0c0011c19d690cd0560399039ec5b04f194c6-merged.mount: Deactivated successfully.
Dec 05 10:23:41 compute-0 podman[275158]: 2025-12-05 10:23:41.576801155 +0000 UTC m=+1.212255280 container remove bd90a11ae8ecec197324c4b8bbe87e84e64deffc7f63bdbe78507c51e74f8581 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_rubin, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:23:41 compute-0 systemd[1]: libpod-conmon-bd90a11ae8ecec197324c4b8bbe87e84e64deffc7f63bdbe78507c51e74f8581.scope: Deactivated successfully.
Dec 05 10:23:41 compute-0 sudo[275049]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:23:41 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:23:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:23:41 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:23:41 compute-0 sudo[275266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:23:41 compute-0 sudo[275266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:23:41 compute-0 sudo[275266]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:41.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:42.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:23:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:23:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 113 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.4 MiB/s wr, 59 op/s
Dec 05 10:23:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:23:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:23:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:23:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:23:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:23:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:23:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:23:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:43.743Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:23:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:43.744Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:43.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:44.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:44 compute-0 ceph-mon[74418]: pgmap v1038: 353 pgs: 353 active+clean; 113 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.4 MiB/s wr, 59 op/s
Dec 05 10:23:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 407 KiB/s rd, 4.6 MiB/s wr, 108 op/s
Dec 05 10:23:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:23:45] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Dec 05 10:23:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:23:45] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Dec 05 10:23:45 compute-0 ceph-mon[74418]: pgmap v1039: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 407 KiB/s rd, 4.6 MiB/s wr, 108 op/s
Dec 05 10:23:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:45.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:46.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:46 compute-0 nova_compute[257087]: 2025-12-05 10:23:46.526 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 05 10:23:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:47.450Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:47.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:23:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:23:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:23:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:23:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:48.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:48 compute-0 ceph-mon[74418]: pgmap v1040: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 05 10:23:48 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/467100406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:23:48 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3694105084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:23:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Dec 05 10:23:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:48.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:49.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:50 compute-0 ceph-mon[74418]: pgmap v1041: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Dec 05 10:23:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:23:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:50.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:23:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 1.9 MiB/s wr, 42 op/s
Dec 05 10:23:51 compute-0 nova_compute[257087]: 2025-12-05 10:23:51.528 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:51 compute-0 nova_compute[257087]: 2025-12-05 10:23:51.533 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:51.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:52 compute-0 ceph-mon[74418]: pgmap v1042: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 1.9 MiB/s wr, 42 op/s
Dec 05 10:23:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:52.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 1.9 MiB/s wr, 42 op/s
Dec 05 10:23:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:23:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:23:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:23:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:23:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:53.745Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:23:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:53.746Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:53.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:54 compute-0 ceph-mon[74418]: pgmap v1043: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 1.9 MiB/s wr, 42 op/s
Dec 05 10:23:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:23:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:54.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:23:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 116 op/s
Dec 05 10:23:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:55 compute-0 sudo[275305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:23:55 compute-0 sudo[275305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:23:55 compute-0 sudo[275305]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:23:55] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Dec 05 10:23:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:23:55] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Dec 05 10:23:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:55.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:56 compute-0 ceph-mon[74418]: pgmap v1044: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 116 op/s
Dec 05 10:23:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:56.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:56 compute-0 podman[275333]: 2025-12-05 10:23:56.42865649 +0000 UTC m=+0.079435841 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 05 10:23:56 compute-0 podman[275331]: 2025-12-05 10:23:56.442203149 +0000 UTC m=+0.093090443 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 05 10:23:56 compute-0 podman[275332]: 2025-12-05 10:23:56.523702246 +0000 UTC m=+0.174325863 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 05 10:23:56 compute-0 nova_compute[257087]: 2025-12-05 10:23:56.529 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:56 compute-0 nova_compute[257087]: 2025-12-05 10:23:56.534 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Dec 05 10:23:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/112248783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:23:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/112248783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:23:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:57.452Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:23:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:23:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:23:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:23:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:23:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:23:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:23:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:23:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:57.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:23:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:23:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:23:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:23:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:23:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:23:58.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:23:58 compute-0 ceph-mon[74418]: pgmap v1045: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Dec 05 10:23:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:23:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 75 op/s
Dec 05 10:23:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:23:58.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:23:59 compute-0 ceph-mon[74418]: pgmap v1046: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 75 op/s
Dec 05 10:23:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:23:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:23:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:23:59.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:00.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 05 10:24:01 compute-0 nova_compute[257087]: 2025-12-05 10:24:01.532 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:01 compute-0 nova_compute[257087]: 2025-12-05 10:24:01.535 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:01.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:01 compute-0 ceph-mon[74418]: pgmap v1047: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 05 10:24:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:02.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 05 10:24:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:24:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:24:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:24:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:24:03 compute-0 nova_compute[257087]: 2025-12-05 10:24:03.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:03.747Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:03.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:04 compute-0 ceph-mon[74418]: pgmap v1048: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 05 10:24:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 192 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Dec 05 10:24:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:24:05] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Dec 05 10:24:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:24:05] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Dec 05 10:24:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:05.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:06 compute-0 ceph-mon[74418]: pgmap v1049: 353 pgs: 353 active+clean; 192 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Dec 05 10:24:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:06.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:06 compute-0 nova_compute[257087]: 2025-12-05 10:24:06.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:06 compute-0 nova_compute[257087]: 2025-12-05 10:24:06.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:06 compute-0 nova_compute[257087]: 2025-12-05 10:24:06.533 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:06 compute-0 nova_compute[257087]: 2025-12-05 10:24:06.535 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 192 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.0 MiB/s wr, 52 op/s
Dec 05 10:24:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:07.455Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:07 compute-0 nova_compute[257087]: 2025-12-05 10:24:07.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:07.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:24:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:24:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:24:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:24:08 compute-0 ceph-mon[74418]: pgmap v1050: 353 pgs: 353 active+clean; 192 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.0 MiB/s wr, 52 op/s
Dec 05 10:24:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:24:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:08.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:24:08 compute-0 nova_compute[257087]: 2025-12-05 10:24:08.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:08 compute-0 nova_compute[257087]: 2025-12-05 10:24:08.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec 05 10:24:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:08.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:09 compute-0 nova_compute[257087]: 2025-12-05 10:24:09.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:09.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:10.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 05 10:24:11 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3951474038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:24:11 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4122104346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:24:11 compute-0 nova_compute[257087]: 2025-12-05 10:24:11.536 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:12.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:24:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:12.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:24:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 05 10:24:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:24:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:24:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:24:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:24:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:24:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:24:13 compute-0 ceph-mon[74418]: pgmap v1051: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec 05 10:24:13 compute-0 ceph-mon[74418]: pgmap v1052: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 05 10:24:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:24:13 compute-0 nova_compute[257087]: 2025-12-05 10:24:13.269 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:24:13 compute-0 nova_compute[257087]: 2025-12-05 10:24:13.270 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:24:13 compute-0 nova_compute[257087]: 2025-12-05 10:24:13.270 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:24:13 compute-0 nova_compute[257087]: 2025-12-05 10:24:13.270 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:24:13 compute-0 nova_compute[257087]: 2025-12-05 10:24:13.271 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:24:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:13.749Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:24:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639209125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:24:13 compute-0 nova_compute[257087]: 2025-12-05 10:24:13.797 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:24:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:14.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:14 compute-0 nova_compute[257087]: 2025-12-05 10:24:14.051 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:24:14 compute-0 nova_compute[257087]: 2025-12-05 10:24:14.053 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4580MB free_disk=59.89735412597656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:24:14 compute-0 nova_compute[257087]: 2025-12-05 10:24:14.053 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:24:14 compute-0 nova_compute[257087]: 2025-12-05 10:24:14.053 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:24:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:14.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:14 compute-0 ceph-mon[74418]: pgmap v1053: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 05 10:24:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3639209125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:24:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1208202907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:24:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3911176836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:24:14 compute-0 nova_compute[257087]: 2025-12-05 10:24:14.789 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:24:14 compute-0 nova_compute[257087]: 2025-12-05 10:24:14.790 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:24:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 05 10:24:14 compute-0 nova_compute[257087]: 2025-12-05 10:24:14.808 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:24:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:24:15 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/167478413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:24:15 compute-0 nova_compute[257087]: 2025-12-05 10:24:15.249 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:24:15 compute-0 nova_compute[257087]: 2025-12-05 10:24:15.259 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:24:15 compute-0 sudo[275459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:24:15 compute-0 sudo[275459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:15 compute-0 sudo[275459]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:15 compute-0 nova_compute[257087]: 2025-12-05 10:24:15.388 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:24:15 compute-0 nova_compute[257087]: 2025-12-05 10:24:15.392 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:24:15 compute-0 nova_compute[257087]: 2025-12-05 10:24:15.393 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:24:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:24:15] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec 05 10:24:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:24:15] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec 05 10:24:15 compute-0 ceph-mon[74418]: pgmap v1054: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 05 10:24:15 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/167478413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:24:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:16.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:16.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:16 compute-0 nova_compute[257087]: 2025-12-05 10:24:16.394 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:16 compute-0 nova_compute[257087]: 2025-12-05 10:24:16.539 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:24:16 compute-0 nova_compute[257087]: 2025-12-05 10:24:16.542 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:24:16 compute-0 nova_compute[257087]: 2025-12-05 10:24:16.542 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:24:16 compute-0 nova_compute[257087]: 2025-12-05 10:24:16.542 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:24:16 compute-0 nova_compute[257087]: 2025-12-05 10:24:16.630 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:16 compute-0 nova_compute[257087]: 2025-12-05 10:24:16.631 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:24:16 compute-0 nova_compute[257087]: 2025-12-05 10:24:16.669 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:16 compute-0 nova_compute[257087]: 2025-12-05 10:24:16.669 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:24:16 compute-0 nova_compute[257087]: 2025-12-05 10:24:16.669 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:24:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 105 KiB/s wr, 16 op/s
Dec 05 10:24:17 compute-0 nova_compute[257087]: 2025-12-05 10:24:17.298 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:24:17 compute-0 nova_compute[257087]: 2025-12-05 10:24:17.299 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:17 compute-0 nova_compute[257087]: 2025-12-05 10:24:17.299 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:24:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:17.456Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:24:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:17.457Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec 05 10:24:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:24:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:24:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:24:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:24:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:18.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:18 compute-0 ceph-mon[74418]: pgmap v1055: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 105 KiB/s wr, 16 op/s
Dec 05 10:24:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:18.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 105 KiB/s wr, 16 op/s
Dec 05 10:24:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:18.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:19 compute-0 ceph-mon[74418]: pgmap v1056: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 105 KiB/s wr, 16 op/s
Dec 05 10:24:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:20.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:20.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:24:20.581 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:24:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:24:20.582 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:24:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:24:20.582 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:24:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Dec 05 10:24:21 compute-0 nova_compute[257087]: 2025-12-05 10:24:21.632 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:21 compute-0 nova_compute[257087]: 2025-12-05 10:24:21.636 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:21 compute-0 ceph-mon[74418]: pgmap v1057: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Dec 05 10:24:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:22.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:22.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Dec 05 10:24:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:24:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:24:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:24:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:24:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:23.751Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:24.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:24:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:24.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:24:24 compute-0 ceph-mon[74418]: pgmap v1058: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Dec 05 10:24:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 16 KiB/s wr, 2 op/s
Dec 05 10:24:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:24:25] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec 05 10:24:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:24:25] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec 05 10:24:25 compute-0 ceph-mon[74418]: pgmap v1059: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 16 KiB/s wr, 2 op/s
Dec 05 10:24:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:26.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:24:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:26.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:24:26 compute-0 nova_compute[257087]: 2025-12-05 10:24:26.637 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:24:26 compute-0 nova_compute[257087]: 2025-12-05 10:24:26.639 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:24:26 compute-0 nova_compute[257087]: 2025-12-05 10:24:26.640 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:24:26 compute-0 nova_compute[257087]: 2025-12-05 10:24:26.640 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:24:26 compute-0 nova_compute[257087]: 2025-12-05 10:24:26.673 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:26 compute-0 nova_compute[257087]: 2025-12-05 10:24:26.674 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:24:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Dec 05 10:24:27 compute-0 podman[275496]: 2025-12-05 10:24:27.45171206 +0000 UTC m=+0.099158870 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 05 10:24:27 compute-0 podman[275498]: 2025-12-05 10:24:27.451251336 +0000 UTC m=+0.092410225 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:24:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:27.459Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:27 compute-0 podman[275497]: 2025-12-05 10:24:27.488991713 +0000 UTC m=+0.136464723 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 10:24:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:24:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:24:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:24:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:24:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:24:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:24:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:24:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:24:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:24:27
Dec 05 10:24:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:24:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:24:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['cephfs.cephfs.data', '.nfs', 'vms', 'default.rgw.control', 'volumes', 'images', 'default.rgw.meta', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log']
Dec 05 10:24:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:24:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:24:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:24:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:24:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:24:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:28.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001518291739923162 of space, bias 1.0, pg target 0.4554875219769486 quantized to 32 (current 32)
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:24:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:28.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:28 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:24:28.363 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:24:28 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:24:28.364 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:24:28 compute-0 nova_compute[257087]: 2025-12-05 10:24:28.365 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:28 compute-0 ceph-mon[74418]: pgmap v1060: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Dec 05 10:24:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 9.2 KiB/s wr, 3 op/s
Dec 05 10:24:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:28.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:24:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:28.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:29 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:24:29 compute-0 ceph-mon[74418]: pgmap v1061: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 9.2 KiB/s wr, 3 op/s
Dec 05 10:24:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:30.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:30.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 9.2 KiB/s wr, 3 op/s
Dec 05 10:24:31 compute-0 ceph-mon[74418]: pgmap v1062: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 9.2 KiB/s wr, 3 op/s
Dec 05 10:24:31 compute-0 nova_compute[257087]: 2025-12-05 10:24:31.676 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:31 compute-0 nova_compute[257087]: 2025-12-05 10:24:31.678 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:24:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:32.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:32.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:32 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:24:32.367 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:24:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 9.2 KiB/s wr, 3 op/s
Dec 05 10:24:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:24:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:24:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:24:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:24:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:33.752Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:34.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:34 compute-0 ceph-mon[74418]: pgmap v1063: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 9.2 KiB/s wr, 3 op/s
Dec 05 10:24:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:24:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:34.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:24:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 13 KiB/s wr, 4 op/s
Dec 05 10:24:35 compute-0 sudo[275570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:24:35 compute-0 sudo[275570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:35 compute-0 sudo[275570]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:24:35] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec 05 10:24:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:24:35] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec 05 10:24:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:36.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:36 compute-0 ceph-mon[74418]: pgmap v1064: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 13 KiB/s wr, 4 op/s
Dec 05 10:24:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:36.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:36 compute-0 nova_compute[257087]: 2025-12-05 10:24:36.678 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:24:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 12 KiB/s wr, 3 op/s
Dec 05 10:24:37 compute-0 ceph-mon[74418]: pgmap v1065: 353 pgs: 353 active+clean; 200 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 12 KiB/s wr, 3 op/s
Dec 05 10:24:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:37.460Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:24:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:24:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:24:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:24:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:38.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3008700282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:24:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:38.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 15 KiB/s wr, 31 op/s
Dec 05 10:24:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:38.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:39 compute-0 ceph-mon[74418]: pgmap v1066: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 15 KiB/s wr, 31 op/s
Dec 05 10:24:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:40.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:40.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Dec 05 10:24:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:41 compute-0 nova_compute[257087]: 2025-12-05 10:24:41.682 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:24:41 compute-0 nova_compute[257087]: 2025-12-05 10:24:41.686 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:24:41 compute-0 sudo[275601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:24:41 compute-0 sudo[275601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:42 compute-0 sudo[275601]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:42 compute-0 ceph-mon[74418]: pgmap v1067: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Dec 05 10:24:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4061618329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:24:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:42.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:42 compute-0 sudo[275626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 10:24:42 compute-0 sudo[275626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:42.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:24:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:24:42 compute-0 podman[275724]: 2025-12-05 10:24:42.739998894 +0000 UTC m=+0.073672224 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:24:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Dec 05 10:24:42 compute-0 podman[275724]: 2025-12-05 10:24:42.862679162 +0000 UTC m=+0.196352492 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:24:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:24:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:24:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:24:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:24:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:24:43 compute-0 podman[275844]: 2025-12-05 10:24:43.376575042 +0000 UTC m=+0.068442823 container exec 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:24:43 compute-0 podman[275844]: 2025-12-05 10:24:43.387674424 +0000 UTC m=+0.079542205 container exec_died 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:24:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:43.752Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:43 compute-0 podman[275936]: 2025-12-05 10:24:43.822422581 +0000 UTC m=+0.089817245 container exec 861f6a1b65dda022baecf3a1d543dbc6380dd0161a45bd75168d782fe13058a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 05 10:24:43 compute-0 podman[275936]: 2025-12-05 10:24:43.835934498 +0000 UTC m=+0.103329182 container exec_died 861f6a1b65dda022baecf3a1d543dbc6380dd0161a45bd75168d782fe13058a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:24:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:44.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:44 compute-0 ceph-mon[74418]: pgmap v1068: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Dec 05 10:24:44 compute-0 podman[276001]: 2025-12-05 10:24:44.098640645 +0000 UTC m=+0.063874169 container exec d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 10:24:44 compute-0 podman[276001]: 2025-12-05 10:24:44.115624367 +0000 UTC m=+0.080857911 container exec_died d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 10:24:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:44.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:44 compute-0 podman[276068]: 2025-12-05 10:24:44.397849255 +0000 UTC m=+0.072725229 container exec f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.tags=Ceph keepalived, release=1793, vcs-type=git, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc.)
Dec 05 10:24:44 compute-0 podman[276068]: 2025-12-05 10:24:44.414878428 +0000 UTC m=+0.089754342 container exec_died f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vendor=Red Hat, Inc., io.buildah.version=1.28.2, distribution-scope=public, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=)
Dec 05 10:24:44 compute-0 podman[276136]: 2025-12-05 10:24:44.661190329 +0000 UTC m=+0.054127474 container exec a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:24:44 compute-0 podman[276136]: 2025-12-05 10:24:44.728783028 +0000 UTC m=+0.121720163 container exec_died a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:24:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.6 KiB/s wr, 56 op/s
Dec 05 10:24:45 compute-0 podman[276208]: 2025-12-05 10:24:45.301484047 +0000 UTC m=+0.067059775 container exec 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 10:24:45 compute-0 podman[276208]: 2025-12-05 10:24:45.57876033 +0000 UTC m=+0.344336068 container exec_died 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 10:24:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:24:45] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Dec 05 10:24:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:24:45] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Dec 05 10:24:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:46 compute-0 podman[276314]: 2025-12-05 10:24:46.01358847 +0000 UTC m=+0.055442259 container exec 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:24:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:46.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:46 compute-0 ceph-mon[74418]: pgmap v1069: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.6 KiB/s wr, 56 op/s
Dec 05 10:24:46 compute-0 podman[276314]: 2025-12-05 10:24:46.111813992 +0000 UTC m=+0.153667781 container exec_died 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:24:46 compute-0 sudo[275626]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:24:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:46.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:46 compute-0 nova_compute[257087]: 2025-12-05 10:24:46.686 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:24:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Dec 05 10:24:47 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:24:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:24:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:47.462Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:47 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:24:47 compute-0 sudo[276358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:24:47 compute-0 sudo[276358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:47 compute-0 sudo[276358]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:47 compute-0 sudo[276383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:24:47 compute-0 sudo[276383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:24:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:24:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:24:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:24:48 compute-0 ceph-mon[74418]: pgmap v1070: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Dec 05 10:24:48 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:24:48 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:24:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:48.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:48 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:24:48.057 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:c0:2d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-4f2d4122-dc00-4e37-87bd-412266af93b7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f2d4122-dc00-4e37-87bd-412266af93b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '838b1c7df82149408a85854af5a04909', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7409254-b4ed-4247-94a7-4e39fd02b6b5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8be4d49f-442e-4670-a904-b6cb3110989c) old=Port_Binding(mac=['fa:16:3e:98:c0:2d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4f2d4122-dc00-4e37-87bd-412266af93b7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f2d4122-dc00-4e37-87bd-412266af93b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '838b1c7df82149408a85854af5a04909', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:24:48 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:24:48.059 165250 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8be4d49f-442e-4670-a904-b6cb3110989c in datapath 4f2d4122-dc00-4e37-87bd-412266af93b7 updated
Dec 05 10:24:48 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:24:48.060 165250 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4f2d4122-dc00-4e37-87bd-412266af93b7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:24:48 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:24:48.066 268908 DEBUG oslo.privsep.daemon [-] privsep: reply[98958a24-17d2-4e64-886c-0dff587c43f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:24:48 compute-0 sudo[276383]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:24:48 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:24:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:24:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:24:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.5 KiB/s wr, 59 op/s
Dec 05 10:24:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:24:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:24:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:24:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:24:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:24:48 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:24:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:24:48 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:24:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:24:48 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:24:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:48.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:48 compute-0 sudo[276440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:24:48 compute-0 sudo[276440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:48 compute-0 sudo[276440]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:48 compute-0 sudo[276465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:24:48 compute-0 sudo[276465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:48.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:48 compute-0 podman[276532]: 2025-12-05 10:24:48.926658247 +0000 UTC m=+0.045084827 container create 8a3b1497f72b7e2f210141e1e2204c7e8404b8d761ecf01149493d2cac946c31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:24:48 compute-0 systemd[1]: Started libpod-conmon-8a3b1497f72b7e2f210141e1e2204c7e8404b8d761ecf01149493d2cac946c31.scope.
Dec 05 10:24:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:24:49 compute-0 podman[276532]: 2025-12-05 10:24:48.906970831 +0000 UTC m=+0.025397441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:24:49 compute-0 podman[276532]: 2025-12-05 10:24:49.009955073 +0000 UTC m=+0.128381683 container init 8a3b1497f72b7e2f210141e1e2204c7e8404b8d761ecf01149493d2cac946c31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 10:24:49 compute-0 podman[276532]: 2025-12-05 10:24:49.019793781 +0000 UTC m=+0.138220371 container start 8a3b1497f72b7e2f210141e1e2204c7e8404b8d761ecf01149493d2cac946c31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 10:24:49 compute-0 podman[276532]: 2025-12-05 10:24:49.023535733 +0000 UTC m=+0.141962343 container attach 8a3b1497f72b7e2f210141e1e2204c7e8404b8d761ecf01149493d2cac946c31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:24:49 compute-0 vigorous_wilbur[276549]: 167 167
Dec 05 10:24:49 compute-0 systemd[1]: libpod-8a3b1497f72b7e2f210141e1e2204c7e8404b8d761ecf01149493d2cac946c31.scope: Deactivated successfully.
Dec 05 10:24:49 compute-0 conmon[276549]: conmon 8a3b1497f72b7e2f2101 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a3b1497f72b7e2f210141e1e2204c7e8404b8d761ecf01149493d2cac946c31.scope/container/memory.events
Dec 05 10:24:49 compute-0 podman[276532]: 2025-12-05 10:24:49.028647452 +0000 UTC m=+0.147074052 container died 8a3b1497f72b7e2f210141e1e2204c7e8404b8d761ecf01149493d2cac946c31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 05 10:24:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:24:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:24:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:24:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:24:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:24:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:24:49 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:24:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-44838b79deeb2eb0f16746b0654277bf9630616ee89a961a160caabe5237b401-merged.mount: Deactivated successfully.
Dec 05 10:24:49 compute-0 podman[276532]: 2025-12-05 10:24:49.082412585 +0000 UTC m=+0.200839175 container remove 8a3b1497f72b7e2f210141e1e2204c7e8404b8d761ecf01149493d2cac946c31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wilbur, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 10:24:49 compute-0 systemd[1]: libpod-conmon-8a3b1497f72b7e2f210141e1e2204c7e8404b8d761ecf01149493d2cac946c31.scope: Deactivated successfully.
Dec 05 10:24:49 compute-0 podman[276571]: 2025-12-05 10:24:49.263800959 +0000 UTC m=+0.046697201 container create 39047b6212823a89b1dbe439fc7a2c59b4b517bd1df7553803738a3057021b10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 10:24:49 compute-0 systemd[1]: Started libpod-conmon-39047b6212823a89b1dbe439fc7a2c59b4b517bd1df7553803738a3057021b10.scope.
Dec 05 10:24:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b749013b8a780a4eb3e13120885f9a8d539c40fb66747959703021531daa2d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:49 compute-0 podman[276571]: 2025-12-05 10:24:49.244274598 +0000 UTC m=+0.027170870 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b749013b8a780a4eb3e13120885f9a8d539c40fb66747959703021531daa2d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b749013b8a780a4eb3e13120885f9a8d539c40fb66747959703021531daa2d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b749013b8a780a4eb3e13120885f9a8d539c40fb66747959703021531daa2d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b749013b8a780a4eb3e13120885f9a8d539c40fb66747959703021531daa2d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:49 compute-0 podman[276571]: 2025-12-05 10:24:49.359990846 +0000 UTC m=+0.142887128 container init 39047b6212823a89b1dbe439fc7a2c59b4b517bd1df7553803738a3057021b10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 10:24:49 compute-0 podman[276571]: 2025-12-05 10:24:49.368752414 +0000 UTC m=+0.151648656 container start 39047b6212823a89b1dbe439fc7a2c59b4b517bd1df7553803738a3057021b10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_solomon, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:24:49 compute-0 podman[276571]: 2025-12-05 10:24:49.372380323 +0000 UTC m=+0.155276595 container attach 39047b6212823a89b1dbe439fc7a2c59b4b517bd1df7553803738a3057021b10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_solomon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 10:24:49 compute-0 thirsty_solomon[276587]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:24:49 compute-0 thirsty_solomon[276587]: --> All data devices are unavailable
Dec 05 10:24:49 compute-0 systemd[1]: libpod-39047b6212823a89b1dbe439fc7a2c59b4b517bd1df7553803738a3057021b10.scope: Deactivated successfully.
Dec 05 10:24:49 compute-0 podman[276571]: 2025-12-05 10:24:49.78028758 +0000 UTC m=+0.563183832 container died 39047b6212823a89b1dbe439fc7a2c59b4b517bd1df7553803738a3057021b10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 05 10:24:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b749013b8a780a4eb3e13120885f9a8d539c40fb66747959703021531daa2d0-merged.mount: Deactivated successfully.
Dec 05 10:24:49 compute-0 podman[276571]: 2025-12-05 10:24:49.83580481 +0000 UTC m=+0.618701052 container remove 39047b6212823a89b1dbe439fc7a2c59b4b517bd1df7553803738a3057021b10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_solomon, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 10:24:49 compute-0 systemd[1]: libpod-conmon-39047b6212823a89b1dbe439fc7a2c59b4b517bd1df7553803738a3057021b10.scope: Deactivated successfully.
Dec 05 10:24:49 compute-0 sudo[276465]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:49 compute-0 sudo[276616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:24:49 compute-0 sudo[276616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:49 compute-0 sudo[276616]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:50 compute-0 sudo[276641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:24:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:50.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:50 compute-0 sudo[276641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:50 compute-0 ceph-mon[74418]: pgmap v1071: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.5 KiB/s wr, 59 op/s
Dec 05 10:24:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 05 10:24:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:50.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:50 compute-0 podman[276708]: 2025-12-05 10:24:50.496507964 +0000 UTC m=+0.051273357 container create 1995872e3f4dc8a15bf5f7aa178b44db065975c740ba6537f897be874ef80fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 05 10:24:50 compute-0 systemd[1]: Started libpod-conmon-1995872e3f4dc8a15bf5f7aa178b44db065975c740ba6537f897be874ef80fdc.scope.
Dec 05 10:24:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:24:50 compute-0 podman[276708]: 2025-12-05 10:24:50.472791798 +0000 UTC m=+0.027557201 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:24:50 compute-0 podman[276708]: 2025-12-05 10:24:50.578059342 +0000 UTC m=+0.132824765 container init 1995872e3f4dc8a15bf5f7aa178b44db065975c740ba6537f897be874ef80fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 10:24:50 compute-0 podman[276708]: 2025-12-05 10:24:50.588019313 +0000 UTC m=+0.142784716 container start 1995872e3f4dc8a15bf5f7aa178b44db065975c740ba6537f897be874ef80fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 10:24:50 compute-0 podman[276708]: 2025-12-05 10:24:50.591792206 +0000 UTC m=+0.146557579 container attach 1995872e3f4dc8a15bf5f7aa178b44db065975c740ba6537f897be874ef80fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 10:24:50 compute-0 dreamy_germain[276725]: 167 167
Dec 05 10:24:50 compute-0 systemd[1]: libpod-1995872e3f4dc8a15bf5f7aa178b44db065975c740ba6537f897be874ef80fdc.scope: Deactivated successfully.
Dec 05 10:24:50 compute-0 conmon[276725]: conmon 1995872e3f4dc8a15bf5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1995872e3f4dc8a15bf5f7aa178b44db065975c740ba6537f897be874ef80fdc.scope/container/memory.events
Dec 05 10:24:50 compute-0 podman[276708]: 2025-12-05 10:24:50.597552053 +0000 UTC m=+0.152317416 container died 1995872e3f4dc8a15bf5f7aa178b44db065975c740ba6537f897be874ef80fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:24:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a74dff554eb59ed971643d7e7a6a72c0f28534482fb859da23db57c68c388a7-merged.mount: Deactivated successfully.
Dec 05 10:24:50 compute-0 podman[276708]: 2025-12-05 10:24:50.635535955 +0000 UTC m=+0.190301318 container remove 1995872e3f4dc8a15bf5f7aa178b44db065975c740ba6537f897be874ef80fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:24:50 compute-0 systemd[1]: libpod-conmon-1995872e3f4dc8a15bf5f7aa178b44db065975c740ba6537f897be874ef80fdc.scope: Deactivated successfully.
Dec 05 10:24:50 compute-0 podman[276747]: 2025-12-05 10:24:50.857046511 +0000 UTC m=+0.055826078 container create 909edf3fe4453be8eb7161a61c78b5103d6801a26303f03cb3cda30fa504351f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:24:50 compute-0 systemd[1]: Started libpod-conmon-909edf3fe4453be8eb7161a61c78b5103d6801a26303f03cb3cda30fa504351f.scope.
Dec 05 10:24:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:24:50 compute-0 podman[276747]: 2025-12-05 10:24:50.836219655 +0000 UTC m=+0.034999232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:24:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613402e9e5372d6746cceff6f31dac668c63cd0bff5826723ff077fa0dbcbebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613402e9e5372d6746cceff6f31dac668c63cd0bff5826723ff077fa0dbcbebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613402e9e5372d6746cceff6f31dac668c63cd0bff5826723ff077fa0dbcbebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613402e9e5372d6746cceff6f31dac668c63cd0bff5826723ff077fa0dbcbebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:50 compute-0 podman[276747]: 2025-12-05 10:24:50.947851322 +0000 UTC m=+0.146630899 container init 909edf3fe4453be8eb7161a61c78b5103d6801a26303f03cb3cda30fa504351f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:24:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:50 compute-0 podman[276747]: 2025-12-05 10:24:50.962766648 +0000 UTC m=+0.161546205 container start 909edf3fe4453be8eb7161a61c78b5103d6801a26303f03cb3cda30fa504351f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_bhabha, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:24:50 compute-0 podman[276747]: 2025-12-05 10:24:50.967088996 +0000 UTC m=+0.165868583 container attach 909edf3fe4453be8eb7161a61c78b5103d6801a26303f03cb3cda30fa504351f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]: {
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:     "1": [
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:         {
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             "devices": [
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "/dev/loop3"
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             ],
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             "lv_name": "ceph_lv0",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             "lv_size": "21470642176",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             "name": "ceph_lv0",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             "tags": {
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.cluster_name": "ceph",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.crush_device_class": "",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.encrypted": "0",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.osd_id": "1",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.type": "block",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.vdo": "0",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:                 "ceph.with_tpm": "0"
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             },
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             "type": "block",
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:             "vg_name": "ceph_vg0"
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:         }
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]:     ]
Dec 05 10:24:51 compute-0 nervous_bhabha[276764]: }
Dec 05 10:24:51 compute-0 systemd[1]: libpod-909edf3fe4453be8eb7161a61c78b5103d6801a26303f03cb3cda30fa504351f.scope: Deactivated successfully.
Dec 05 10:24:51 compute-0 podman[276747]: 2025-12-05 10:24:51.310500987 +0000 UTC m=+0.509280544 container died 909edf3fe4453be8eb7161a61c78b5103d6801a26303f03cb3cda30fa504351f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:24:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-613402e9e5372d6746cceff6f31dac668c63cd0bff5826723ff077fa0dbcbebd-merged.mount: Deactivated successfully.
Dec 05 10:24:51 compute-0 podman[276747]: 2025-12-05 10:24:51.361338611 +0000 UTC m=+0.560118168 container remove 909edf3fe4453be8eb7161a61c78b5103d6801a26303f03cb3cda30fa504351f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 10:24:51 compute-0 systemd[1]: libpod-conmon-909edf3fe4453be8eb7161a61c78b5103d6801a26303f03cb3cda30fa504351f.scope: Deactivated successfully.
Dec 05 10:24:51 compute-0 sudo[276641]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:51 compute-0 sudo[276785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:24:51 compute-0 sudo[276785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:51 compute-0 sudo[276785]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:51 compute-0 sudo[276810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:24:51 compute-0 sudo[276810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:51 compute-0 nova_compute[257087]: 2025-12-05 10:24:51.690 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:24:52 compute-0 podman[276877]: 2025-12-05 10:24:52.026325291 +0000 UTC m=+0.046165307 container create 3619fb04a787b5d508de81cbb36877c86df3462ac30ebbe2559b8059b497f417 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_khayyam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 05 10:24:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:52.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:52 compute-0 systemd[1]: Started libpod-conmon-3619fb04a787b5d508de81cbb36877c86df3462ac30ebbe2559b8059b497f417.scope.
Dec 05 10:24:52 compute-0 podman[276877]: 2025-12-05 10:24:52.003642074 +0000 UTC m=+0.023482100 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:24:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:24:52 compute-0 podman[276877]: 2025-12-05 10:24:52.131408029 +0000 UTC m=+0.151248055 container init 3619fb04a787b5d508de81cbb36877c86df3462ac30ebbe2559b8059b497f417 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 05 10:24:52 compute-0 podman[276877]: 2025-12-05 10:24:52.1398733 +0000 UTC m=+0.159713306 container start 3619fb04a787b5d508de81cbb36877c86df3462ac30ebbe2559b8059b497f417 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_khayyam, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:24:52 compute-0 podman[276877]: 2025-12-05 10:24:52.142899943 +0000 UTC m=+0.162739979 container attach 3619fb04a787b5d508de81cbb36877c86df3462ac30ebbe2559b8059b497f417 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 10:24:52 compute-0 infallible_khayyam[276893]: 167 167
Dec 05 10:24:52 compute-0 systemd[1]: libpod-3619fb04a787b5d508de81cbb36877c86df3462ac30ebbe2559b8059b497f417.scope: Deactivated successfully.
Dec 05 10:24:52 compute-0 podman[276898]: 2025-12-05 10:24:52.199482422 +0000 UTC m=+0.028727363 container died 3619fb04a787b5d508de81cbb36877c86df3462ac30ebbe2559b8059b497f417 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 10:24:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2dfe658122ac61db920bd20d3488e765298e47ee0b0d60c75ae31bd66169f64-merged.mount: Deactivated successfully.
Dec 05 10:24:52 compute-0 podman[276898]: 2025-12-05 10:24:52.237111295 +0000 UTC m=+0.066356246 container remove 3619fb04a787b5d508de81cbb36877c86df3462ac30ebbe2559b8059b497f417 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_khayyam, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 10:24:52 compute-0 systemd[1]: libpod-conmon-3619fb04a787b5d508de81cbb36877c86df3462ac30ebbe2559b8059b497f417.scope: Deactivated successfully.
Dec 05 10:24:52 compute-0 ceph-mon[74418]: pgmap v1072: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 05 10:24:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 05 10:24:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:52.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:52 compute-0 podman[276921]: 2025-12-05 10:24:52.447951741 +0000 UTC m=+0.065112522 container create b97a0faa4afa7f11d248f8b49a4e4150d4f379997cca2228452511c7666e2ddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_mclaren, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 10:24:52 compute-0 systemd[1]: Started libpod-conmon-b97a0faa4afa7f11d248f8b49a4e4150d4f379997cca2228452511c7666e2ddc.scope.
Dec 05 10:24:52 compute-0 podman[276921]: 2025-12-05 10:24:52.413086313 +0000 UTC m=+0.030247144 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:24:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f03ce071c480a61d0ca6db047fdddc527aa084b4dc3db4f2b1e3f030f94c67c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f03ce071c480a61d0ca6db047fdddc527aa084b4dc3db4f2b1e3f030f94c67c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f03ce071c480a61d0ca6db047fdddc527aa084b4dc3db4f2b1e3f030f94c67c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f03ce071c480a61d0ca6db047fdddc527aa084b4dc3db4f2b1e3f030f94c67c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:52 compute-0 podman[276921]: 2025-12-05 10:24:52.80409214 +0000 UTC m=+0.421252941 container init b97a0faa4afa7f11d248f8b49a4e4150d4f379997cca2228452511c7666e2ddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 10:24:52 compute-0 podman[276921]: 2025-12-05 10:24:52.811549962 +0000 UTC m=+0.428710713 container start b97a0faa4afa7f11d248f8b49a4e4150d4f379997cca2228452511c7666e2ddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:24:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:24:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:24:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:24:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:24:53 compute-0 podman[276921]: 2025-12-05 10:24:53.219339226 +0000 UTC m=+0.836499987 container attach b97a0faa4afa7f11d248f8b49a4e4150d4f379997cca2228452511c7666e2ddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 10:24:53 compute-0 lvm[277014]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:24:53 compute-0 lvm[277014]: VG ceph_vg0 finished
Dec 05 10:24:53 compute-0 suspicious_mclaren[276939]: {}
Dec 05 10:24:53 compute-0 systemd[1]: libpod-b97a0faa4afa7f11d248f8b49a4e4150d4f379997cca2228452511c7666e2ddc.scope: Deactivated successfully.
Dec 05 10:24:53 compute-0 systemd[1]: libpod-b97a0faa4afa7f11d248f8b49a4e4150d4f379997cca2228452511c7666e2ddc.scope: Consumed 1.409s CPU time.
Dec 05 10:24:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:53.754Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:53 compute-0 podman[277018]: 2025-12-05 10:24:53.780338717 +0000 UTC m=+0.040022279 container died b97a0faa4afa7f11d248f8b49a4e4150d4f379997cca2228452511c7666e2ddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:24:53 compute-0 ceph-mon[74418]: pgmap v1073: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 05 10:24:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:54.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f03ce071c480a61d0ca6db047fdddc527aa084b4dc3db4f2b1e3f030f94c67c-merged.mount: Deactivated successfully.
Dec 05 10:24:54 compute-0 podman[277018]: 2025-12-05 10:24:54.192482749 +0000 UTC m=+0.452166271 container remove b97a0faa4afa7f11d248f8b49a4e4150d4f379997cca2228452511c7666e2ddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_mclaren, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 05 10:24:54 compute-0 systemd[1]: libpod-conmon-b97a0faa4afa7f11d248f8b49a4e4150d4f379997cca2228452511c7666e2ddc.scope: Deactivated successfully.
Dec 05 10:24:54 compute-0 sudo[276810]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:24:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:24:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:24:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:24:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Dec 05 10:24:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:54.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:54 compute-0 sudo[277034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:24:54 compute-0 sudo[277034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:54 compute-0 sudo[277034]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:24:55 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:24:55 compute-0 ceph-mon[74418]: pgmap v1074: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Dec 05 10:24:55 compute-0 sudo[277060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:24:55 compute-0 sudo[277060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:24:55 compute-0 sudo[277060]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:24:55] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Dec 05 10:24:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:24:55] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Dec 05 10:24:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:56.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:24:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:56.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:56 compute-0 nova_compute[257087]: 2025-12-05 10:24:56.695 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:24:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:57.462Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:24:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:24:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:24:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:24:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:24:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:24:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:24:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:24:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:24:57 compute-0 ceph-mon[74418]: pgmap v1075: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:24:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/4266465745' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:24:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/4266465745' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:24:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:24:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:24:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:24:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:24:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:24:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:24:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:24:58.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:24:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:24:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:24:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:24:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:24:58.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:24:58 compute-0 podman[277088]: 2025-12-05 10:24:58.444506841 +0000 UTC m=+0.099638692 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:24:58 compute-0 podman[277090]: 2025-12-05 10:24:58.454571736 +0000 UTC m=+0.106444987 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 10:24:58 compute-0 podman[277089]: 2025-12-05 10:24:58.483434381 +0000 UTC m=+0.138522680 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 05 10:24:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:24:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:24:58.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:00.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:00 compute-0 ceph-mon[74418]: pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:25:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:25:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:00.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:01 compute-0 nova_compute[257087]: 2025-12-05 10:25:01.698 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:01 compute-0 nova_compute[257087]: 2025-12-05 10:25:01.701 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:01 compute-0 nova_compute[257087]: 2025-12-05 10:25:01.701 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:25:01 compute-0 nova_compute[257087]: 2025-12-05 10:25:01.702 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:25:01 compute-0 nova_compute[257087]: 2025-12-05 10:25:01.749 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:01 compute-0 nova_compute[257087]: 2025-12-05 10:25:01.750 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:25:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:25:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:02.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:25:02 compute-0 ceph-mon[74418]: pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:25:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:25:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:02.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:25:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:25:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:25:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:25:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:03.756Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:04.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:04 compute-0 ceph-mon[74418]: pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:25:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:25:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:25:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:04.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:25:04 compute-0 nova_compute[257087]: 2025-12-05 10:25:04.532 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:25:05] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:25:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:25:05] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:25:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:06.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:25:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:06.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:06 compute-0 nova_compute[257087]: 2025-12-05 10:25:06.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:06 compute-0 nova_compute[257087]: 2025-12-05 10:25:06.801 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:07.463Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:25:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:07.463Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:25:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:07.463Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:07 compute-0 ceph-mon[74418]: pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:25:07 compute-0 nova_compute[257087]: 2025-12-05 10:25:07.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:25:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:25:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:25:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:25:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:25:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:08.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:25:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:25:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:08.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:08 compute-0 ceph-mon[74418]: pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:25:08 compute-0 nova_compute[257087]: 2025-12-05 10:25:08.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:08.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:09 compute-0 ceph-mon[74418]: pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:25:09 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2721727917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:25:09 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4161384147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:25:09 compute-0 nova_compute[257087]: 2025-12-05 10:25:09.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:09 compute-0 nova_compute[257087]: 2025-12-05 10:25:09.560 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:25:09 compute-0 nova_compute[257087]: 2025-12-05 10:25:09.562 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:25:09 compute-0 nova_compute[257087]: 2025-12-05 10:25:09.562 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:25:09 compute-0 nova_compute[257087]: 2025-12-05 10:25:09.563 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:25:09 compute-0 nova_compute[257087]: 2025-12-05 10:25:09.564 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:25:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:10.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:25:10 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/84167833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:25:10 compute-0 nova_compute[257087]: 2025-12-05 10:25:10.101 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:25:10 compute-0 nova_compute[257087]: 2025-12-05 10:25:10.269 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:25:10 compute-0 nova_compute[257087]: 2025-12-05 10:25:10.271 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4581MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:25:10 compute-0 nova_compute[257087]: 2025-12-05 10:25:10.271 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:25:10 compute-0 nova_compute[257087]: 2025-12-05 10:25:10.272 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:25:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:25:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:10.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:10 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/84167833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:25:10 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1539258198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:25:10 compute-0 nova_compute[257087]: 2025-12-05 10:25:10.575 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:25:10 compute-0 nova_compute[257087]: 2025-12-05 10:25:10.575 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:25:10 compute-0 nova_compute[257087]: 2025-12-05 10:25:10.598 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:25:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:25:11 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2317687988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:25:11 compute-0 nova_compute[257087]: 2025-12-05 10:25:11.061 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:25:11 compute-0 nova_compute[257087]: 2025-12-05 10:25:11.067 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:25:11 compute-0 nova_compute[257087]: 2025-12-05 10:25:11.086 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:25:11 compute-0 nova_compute[257087]: 2025-12-05 10:25:11.087 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:25:11 compute-0 nova_compute[257087]: 2025-12-05 10:25:11.088 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:25:11 compute-0 ceph-mon[74418]: pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:25:11 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2406199953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:25:11 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2317687988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:25:11 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3195429994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:25:11 compute-0 nova_compute[257087]: 2025-12-05 10:25:11.804 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:12.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:12 compute-0 nova_compute[257087]: 2025-12-05 10:25:12.087 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:12 compute-0 nova_compute[257087]: 2025-12-05 10:25:12.088 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:25:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:12.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:12 compute-0 nova_compute[257087]: 2025-12-05 10:25:12.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:12 compute-0 nova_compute[257087]: 2025-12-05 10:25:12.528 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:25:12 compute-0 nova_compute[257087]: 2025-12-05 10:25:12.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:25:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:25:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:25:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:25:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:25:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:25:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:25:13 compute-0 nova_compute[257087]: 2025-12-05 10:25:13.281 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:25:13 compute-0 nova_compute[257087]: 2025-12-05 10:25:13.284 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:13 compute-0 nova_compute[257087]: 2025-12-05 10:25:13.285 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:25:13 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/839314596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:25:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:13.758Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:14.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:14 compute-0 ceph-mon[74418]: pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:25:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/957137320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 10:25:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:25:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 88 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:25:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:14.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:15 compute-0 ceph-mon[74418]: pgmap v1084: 353 pgs: 353 active+clean; 88 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:25:15 compute-0 sudo[277208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:25:15 compute-0 sudo[277208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:15 compute-0 sudo[277208]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:25:15] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Dec 05 10:25:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:25:15] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Dec 05 10:25:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:16.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 88 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:25:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:16.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:16 compute-0 nova_compute[257087]: 2025-12-05 10:25:16.806 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:16 compute-0 nova_compute[257087]: 2025-12-05 10:25:16.809 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:16 compute-0 nova_compute[257087]: 2025-12-05 10:25:16.809 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:25:16 compute-0 nova_compute[257087]: 2025-12-05 10:25:16.809 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:25:16 compute-0 nova_compute[257087]: 2025-12-05 10:25:16.851 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:16 compute-0 nova_compute[257087]: 2025-12-05 10:25:16.852 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:25:17 compute-0 ceph-mon[74418]: pgmap v1085: 353 pgs: 353 active+clean; 88 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:25:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:17.464Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:25:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:25:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:25:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:25:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:18.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1086: 353 pgs: 353 active+clean; 88 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:25:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:18.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:18.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:19 compute-0 ceph-mon[74418]: pgmap v1086: 353 pgs: 353 active+clean; 88 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 05 10:25:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:20.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Dec 05 10:25:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:20.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:25:20.583 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:25:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:25:20.584 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:25:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:25:20.584 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:25:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:21 compute-0 ceph-mon[74418]: pgmap v1087: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Dec 05 10:25:21 compute-0 nova_compute[257087]: 2025-12-05 10:25:21.853 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:21 compute-0 nova_compute[257087]: 2025-12-05 10:25:21.855 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:21 compute-0 nova_compute[257087]: 2025-12-05 10:25:21.855 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:25:21 compute-0 nova_compute[257087]: 2025-12-05 10:25:21.855 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:25:21 compute-0 nova_compute[257087]: 2025-12-05 10:25:21.891 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:21 compute-0 nova_compute[257087]: 2025-12-05 10:25:21.892 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:25:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:22.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 05 10:25:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:22.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:25:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:25:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:25:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:25:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:23.759Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:23 compute-0 ceph-mon[74418]: pgmap v1088: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 05 10:25:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:24.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 05 10:25:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:24.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.520197) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930325520795, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2146, "num_deletes": 251, "total_data_size": 4463509, "memory_usage": 4530016, "flush_reason": "Manual Compaction"}
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930325635136, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4326561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29642, "largest_seqno": 31787, "table_properties": {"data_size": 4316511, "index_size": 6480, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20440, "raw_average_key_size": 20, "raw_value_size": 4296671, "raw_average_value_size": 4331, "num_data_blocks": 273, "num_entries": 992, "num_filter_entries": 992, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930100, "oldest_key_time": 1764930100, "file_creation_time": 1764930325, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 114894 microseconds, and 15299 cpu microseconds.
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:25:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:25:25] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Dec 05 10:25:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:25:25] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.635486) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4326561 bytes OK
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.635702) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.676636) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.676699) EVENT_LOG_v1 {"time_micros": 1764930325676686, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.676733) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4454644, prev total WAL file size 4454644, number of live WAL files 2.
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.678330) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(4225KB)], [65(12MB)]
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930325678445, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 17237835, "oldest_snapshot_seqno": -1}
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6389 keys, 14982749 bytes, temperature: kUnknown
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930325807783, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14982749, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14939621, "index_size": 26058, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16005, "raw_key_size": 163915, "raw_average_key_size": 25, "raw_value_size": 14824175, "raw_average_value_size": 2320, "num_data_blocks": 1042, "num_entries": 6389, "num_filter_entries": 6389, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764930325, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.808218) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14982749 bytes
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.929407) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.1 rd, 115.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.1, 12.3 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(7.4) write-amplify(3.5) OK, records in: 6910, records dropped: 521 output_compression: NoCompression
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.929457) EVENT_LOG_v1 {"time_micros": 1764930325929439, "job": 36, "event": "compaction_finished", "compaction_time_micros": 129491, "compaction_time_cpu_micros": 45285, "output_level": 6, "num_output_files": 1, "total_output_size": 14982749, "num_input_records": 6910, "num_output_records": 6389, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930325930657, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930325933708, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.678208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.933778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.933785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.933787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.933789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:25:25 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:25:25.933790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:25:25 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:26.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:26 compute-0 ceph-mon[74418]: pgmap v1089: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 05 10:25:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:25:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:26.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:26 compute-0 nova_compute[257087]: 2025-12-05 10:25:26.892 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:26 compute-0 nova_compute[257087]: 2025-12-05 10:25:26.895 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:27.464Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:27 compute-0 ceph-mon[74418]: pgmap v1090: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:25:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:25:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:25:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:25:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:25:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:25:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:25:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:25:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:25:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:25:27
Dec 05 10:25:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:25:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:25:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'backups', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.nfs', 'volumes', '.rgw.root', 'images']
Dec 05 10:25:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:25:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:25:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:25:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:25:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:25:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:25:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:28.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:25:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:25:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:28.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:28.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:25:29 compute-0 podman[277249]: 2025-12-05 10:25:29.40125995 +0000 UTC m=+0.059327535 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:25:29 compute-0 podman[277247]: 2025-12-05 10:25:29.423844945 +0000 UTC m=+0.087348698 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 10:25:29 compute-0 podman[277248]: 2025-12-05 10:25:29.456557744 +0000 UTC m=+0.117399485 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:25:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:30.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Dec 05 10:25:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:30.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:30 compute-0 ceph-mon[74418]: pgmap v1091: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 05 10:25:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:31 compute-0 nova_compute[257087]: 2025-12-05 10:25:31.895 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:31 compute-0 nova_compute[257087]: 2025-12-05 10:25:31.896 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:31 compute-0 ceph-mon[74418]: pgmap v1092: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Dec 05 10:25:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:32.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 639 KiB/s rd, 21 op/s
Dec 05 10:25:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:32.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:25:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:25:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:25:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:25:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:33.759Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:34.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:34 compute-0 ceph-mon[74418]: pgmap v1093: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 639 KiB/s rd, 21 op/s
Dec 05 10:25:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 115 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 908 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Dec 05 10:25:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:25:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:34.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:25:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:25:35] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec 05 10:25:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:25:35] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec 05 10:25:35 compute-0 sudo[277317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:25:35 compute-0 sudo[277317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:35 compute-0 sudo[277317]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:36.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:36 compute-0 ceph-mon[74418]: pgmap v1094: 353 pgs: 353 active+clean; 115 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 908 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Dec 05 10:25:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1095: 353 pgs: 353 active+clean; 115 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Dec 05 10:25:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:36.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:36 compute-0 nova_compute[257087]: 2025-12-05 10:25:36.897 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:36 compute-0 nova_compute[257087]: 2025-12-05 10:25:36.899 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:36 compute-0 nova_compute[257087]: 2025-12-05 10:25:36.899 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:25:36 compute-0 nova_compute[257087]: 2025-12-05 10:25:36.899 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:25:36 compute-0 nova_compute[257087]: 2025-12-05 10:25:36.945 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:36 compute-0 nova_compute[257087]: 2025-12-05 10:25:36.946 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:25:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:25:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.9 total, 600.0 interval
                                           Cumulative writes: 12K writes, 44K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3681 syncs, 3.41 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2310 writes, 7161 keys, 2310 commit groups, 1.0 writes per commit group, ingest: 7.02 MB, 0.01 MB/s
                                           Interval WAL: 2310 writes, 1005 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:25:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:37.466Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:37 compute-0 ceph-mon[74418]: pgmap v1095: 353 pgs: 353 active+clean; 115 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Dec 05 10:25:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:25:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:25:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:25:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:25:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:38.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 115 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Dec 05 10:25:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:38.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:38.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:39 compute-0 ceph-mon[74418]: pgmap v1096: 353 pgs: 353 active+clean; 115 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Dec 05 10:25:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:40.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 05 10:25:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:40.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:41 compute-0 nova_compute[257087]: 2025-12-05 10:25:41.947 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:41 compute-0 nova_compute[257087]: 2025-12-05 10:25:41.948 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:41 compute-0 nova_compute[257087]: 2025-12-05 10:25:41.949 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:25:41 compute-0 nova_compute[257087]: 2025-12-05 10:25:41.949 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:25:42 compute-0 nova_compute[257087]: 2025-12-05 10:25:42.022 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:42 compute-0 nova_compute[257087]: 2025-12-05 10:25:42.023 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:25:42 compute-0 ceph-mon[74418]: pgmap v1097: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 05 10:25:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:42.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1098: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 05 10:25:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:42.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:25:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:25:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:25:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:25:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:25:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:25:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:25:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:43.761Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:44.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:44 compute-0 ceph-mon[74418]: pgmap v1098: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 05 10:25:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 05 10:25:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:44.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:45 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:25:45.096 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:25:45 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:25:45.099 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:25:45 compute-0 nova_compute[257087]: 2025-12-05 10:25:45.098 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:25:45] "GET /metrics HTTP/1.1" 200 48566 "" "Prometheus/2.51.0"
Dec 05 10:25:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:25:45] "GET /metrics HTTP/1.1" 200 48566 "" "Prometheus/2.51.0"
Dec 05 10:25:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:46.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 19 KiB/s wr, 6 op/s
Dec 05 10:25:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:46.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:47 compute-0 ceph-mon[74418]: pgmap v1099: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 05 10:25:47 compute-0 nova_compute[257087]: 2025-12-05 10:25:47.077 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:47.468Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:25:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:25:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:25:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:25:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:48.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 19 KiB/s wr, 6 op/s
Dec 05 10:25:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:48.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:48 compute-0 ceph-mon[74418]: pgmap v1100: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 19 KiB/s wr, 6 op/s
Dec 05 10:25:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:48.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:49 compute-0 ceph-mon[74418]: pgmap v1101: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 19 KiB/s wr, 6 op/s
Dec 05 10:25:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:25:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:50.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:25:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 19 KiB/s wr, 6 op/s
Dec 05 10:25:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:50.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:52 compute-0 nova_compute[257087]: 2025-12-05 10:25:52.079 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:52 compute-0 nova_compute[257087]: 2025-12-05 10:25:52.081 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:25:52 compute-0 nova_compute[257087]: 2025-12-05 10:25:52.081 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:25:52 compute-0 nova_compute[257087]: 2025-12-05 10:25:52.081 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:25:52 compute-0 nova_compute[257087]: 2025-12-05 10:25:52.117 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:52 compute-0 nova_compute[257087]: 2025-12-05 10:25:52.118 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:25:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:52.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 13 KiB/s wr, 1 op/s
Dec 05 10:25:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:52.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:52 compute-0 ceph-mon[74418]: pgmap v1102: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 19 KiB/s wr, 6 op/s
Dec 05 10:25:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3733244298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:25:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:25:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:25:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:25:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:25:53 compute-0 ceph-mon[74418]: pgmap v1103: 353 pgs: 353 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 13 KiB/s wr, 1 op/s
Dec 05 10:25:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:53.763Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:54.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Dec 05 10:25:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:54.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:54 compute-0 sudo[277363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:25:54 compute-0 sudo[277363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:54 compute-0 sudo[277363]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:54 compute-0 sudo[277388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 05 10:25:54 compute-0 sudo[277388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 10:25:55 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:25:55.101 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:25:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:25:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 10:25:55 compute-0 sudo[277388]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:25:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:25:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:25:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:25:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:25:55 compute-0 sudo[277435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:25:55 compute-0 sudo[277435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:55 compute-0 sudo[277435]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:55 compute-0 sudo[277460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:25:55 compute-0 sudo[277460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:25:55] "GET /metrics HTTP/1.1" 200 48566 "" "Prometheus/2.51.0"
Dec 05 10:25:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:25:55] "GET /metrics HTTP/1.1" 200 48566 "" "Prometheus/2.51.0"
Dec 05 10:25:55 compute-0 sudo[277501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:25:55 compute-0 sudo[277501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:55 compute-0 sudo[277501]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:55 compute-0 sudo[277460]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:25:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:25:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:25:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:25:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 05 10:25:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:25:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:25:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:25:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:25:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:25:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:25:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:25:55 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:25:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:25:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:25:56 compute-0 sudo[277540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:25:56 compute-0 sudo[277540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:56 compute-0 sudo[277540]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:56 compute-0 sudo[277565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:25:56 compute-0 sudo[277565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:56 compute-0 ceph-mon[74418]: pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Dec 05 10:25:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:25:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:25:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:25:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:25:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:25:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:25:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:25:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:25:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:25:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:25:56 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:25:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:56.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:56.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:56 compute-0 podman[277633]: 2025-12-05 10:25:56.591320432 +0000 UTC m=+0.049836446 container create e7fb029c4de121bc19941eaec68a70fb797ce80b7988dde973d9f008857f2633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:25:56 compute-0 systemd[1]: Started libpod-conmon-e7fb029c4de121bc19941eaec68a70fb797ce80b7988dde973d9f008857f2633.scope.
Dec 05 10:25:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:25:56 compute-0 podman[277633]: 2025-12-05 10:25:56.569417997 +0000 UTC m=+0.027934041 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:25:56 compute-0 podman[277633]: 2025-12-05 10:25:56.685154166 +0000 UTC m=+0.143670200 container init e7fb029c4de121bc19941eaec68a70fb797ce80b7988dde973d9f008857f2633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 05 10:25:56 compute-0 podman[277633]: 2025-12-05 10:25:56.696772541 +0000 UTC m=+0.155288545 container start e7fb029c4de121bc19941eaec68a70fb797ce80b7988dde973d9f008857f2633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 10:25:56 compute-0 podman[277633]: 2025-12-05 10:25:56.700996957 +0000 UTC m=+0.159512991 container attach e7fb029c4de121bc19941eaec68a70fb797ce80b7988dde973d9f008857f2633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:25:56 compute-0 hardcore_maxwell[277650]: 167 167
Dec 05 10:25:56 compute-0 systemd[1]: libpod-e7fb029c4de121bc19941eaec68a70fb797ce80b7988dde973d9f008857f2633.scope: Deactivated successfully.
Dec 05 10:25:56 compute-0 podman[277633]: 2025-12-05 10:25:56.707691809 +0000 UTC m=+0.166207853 container died e7fb029c4de121bc19941eaec68a70fb797ce80b7988dde973d9f008857f2633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:25:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0631432c2394a00cd85e8721f7479d09dc099cfecfad6e838ba561d58592cec9-merged.mount: Deactivated successfully.
Dec 05 10:25:56 compute-0 podman[277633]: 2025-12-05 10:25:56.749598189 +0000 UTC m=+0.208114203 container remove e7fb029c4de121bc19941eaec68a70fb797ce80b7988dde973d9f008857f2633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_maxwell, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:25:56 compute-0 systemd[1]: libpod-conmon-e7fb029c4de121bc19941eaec68a70fb797ce80b7988dde973d9f008857f2633.scope: Deactivated successfully.
Dec 05 10:25:56 compute-0 podman[277674]: 2025-12-05 10:25:56.94775365 +0000 UTC m=+0.051173934 container create 268ce9dbf5d333ab94484544c8d7d8ec3e74b0eaf5b8e149ed4068ddffda57f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:25:56 compute-0 systemd[1]: Started libpod-conmon-268ce9dbf5d333ab94484544c8d7d8ec3e74b0eaf5b8e149ed4068ddffda57f0.scope.
Dec 05 10:25:57 compute-0 podman[277674]: 2025-12-05 10:25:56.924883937 +0000 UTC m=+0.028304241 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:25:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c64b29fa9ae916c05e08274f06cb0eb3439467195c6b450f70a7f97b6f7412/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c64b29fa9ae916c05e08274f06cb0eb3439467195c6b450f70a7f97b6f7412/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c64b29fa9ae916c05e08274f06cb0eb3439467195c6b450f70a7f97b6f7412/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c64b29fa9ae916c05e08274f06cb0eb3439467195c6b450f70a7f97b6f7412/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c64b29fa9ae916c05e08274f06cb0eb3439467195c6b450f70a7f97b6f7412/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:25:57 compute-0 podman[277674]: 2025-12-05 10:25:57.049390554 +0000 UTC m=+0.152810868 container init 268ce9dbf5d333ab94484544c8d7d8ec3e74b0eaf5b8e149ed4068ddffda57f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 05 10:25:57 compute-0 podman[277674]: 2025-12-05 10:25:57.058175703 +0000 UTC m=+0.161595987 container start 268ce9dbf5d333ab94484544c8d7d8ec3e74b0eaf5b8e149ed4068ddffda57f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:25:57 compute-0 podman[277674]: 2025-12-05 10:25:57.062308166 +0000 UTC m=+0.165728450 container attach 268ce9dbf5d333ab94484544c8d7d8ec3e74b0eaf5b8e149ed4068ddffda57f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 10:25:57 compute-0 nova_compute[257087]: 2025-12-05 10:25:57.118 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:57 compute-0 ceph-mon[74418]: pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 05 10:25:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3741855983' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:25:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3741855983' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:25:57 compute-0 sad_benz[277690]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:25:57 compute-0 sad_benz[277690]: --> All data devices are unavailable
Dec 05 10:25:57 compute-0 systemd[1]: libpod-268ce9dbf5d333ab94484544c8d7d8ec3e74b0eaf5b8e149ed4068ddffda57f0.scope: Deactivated successfully.
Dec 05 10:25:57 compute-0 podman[277674]: 2025-12-05 10:25:57.470087449 +0000 UTC m=+0.573507733 container died 268ce9dbf5d333ab94484544c8d7d8ec3e74b0eaf5b8e149ed4068ddffda57f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:25:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:57.470Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1c64b29fa9ae916c05e08274f06cb0eb3439467195c6b450f70a7f97b6f7412-merged.mount: Deactivated successfully.
Dec 05 10:25:57 compute-0 podman[277674]: 2025-12-05 10:25:57.513965273 +0000 UTC m=+0.617385557 container remove 268ce9dbf5d333ab94484544c8d7d8ec3e74b0eaf5b8e149ed4068ddffda57f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_benz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 10:25:57 compute-0 systemd[1]: libpod-conmon-268ce9dbf5d333ab94484544c8d7d8ec3e74b0eaf5b8e149ed4068ddffda57f0.scope: Deactivated successfully.
Dec 05 10:25:57 compute-0 sudo[277565]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:25:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:25:57 compute-0 sudo[277716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:25:57 compute-0 sudo[277716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:57 compute-0 sudo[277716]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:25:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:25:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:25:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:25:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:25:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:25:57 compute-0 sudo[277741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:25:57 compute-0 sudo[277741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 05 10:25:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:25:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:25:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:25:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:25:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:25:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:25:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:25:58.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:25:58 compute-0 podman[277809]: 2025-12-05 10:25:58.191656379 +0000 UTC m=+0.051292627 container create caed7d5622a5073788bad84eb29d9ebc168175cab0441b9fcf0b7ef7aae039b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 10:25:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:25:58 compute-0 systemd[1]: Started libpod-conmon-caed7d5622a5073788bad84eb29d9ebc168175cab0441b9fcf0b7ef7aae039b0.scope.
Dec 05 10:25:58 compute-0 podman[277809]: 2025-12-05 10:25:58.170451672 +0000 UTC m=+0.030087950 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:25:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:25:58 compute-0 podman[277809]: 2025-12-05 10:25:58.286271552 +0000 UTC m=+0.145907820 container init caed7d5622a5073788bad84eb29d9ebc168175cab0441b9fcf0b7ef7aae039b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Dec 05 10:25:58 compute-0 podman[277809]: 2025-12-05 10:25:58.293841178 +0000 UTC m=+0.153477426 container start caed7d5622a5073788bad84eb29d9ebc168175cab0441b9fcf0b7ef7aae039b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_keller, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:25:58 compute-0 podman[277809]: 2025-12-05 10:25:58.298031283 +0000 UTC m=+0.157667741 container attach caed7d5622a5073788bad84eb29d9ebc168175cab0441b9fcf0b7ef7aae039b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_keller, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:25:58 compute-0 competent_keller[277827]: 167 167
Dec 05 10:25:58 compute-0 systemd[1]: libpod-caed7d5622a5073788bad84eb29d9ebc168175cab0441b9fcf0b7ef7aae039b0.scope: Deactivated successfully.
Dec 05 10:25:58 compute-0 podman[277809]: 2025-12-05 10:25:58.30198922 +0000 UTC m=+0.161625468 container died caed7d5622a5073788bad84eb29d9ebc168175cab0441b9fcf0b7ef7aae039b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_keller, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 05 10:25:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe6d27dccb5ec930214e2b57a878a9574de1043d3edbfafed74fbb8e071a1376-merged.mount: Deactivated successfully.
Dec 05 10:25:58 compute-0 podman[277809]: 2025-12-05 10:25:58.346515681 +0000 UTC m=+0.206151929 container remove caed7d5622a5073788bad84eb29d9ebc168175cab0441b9fcf0b7ef7aae039b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 10:25:58 compute-0 systemd[1]: libpod-conmon-caed7d5622a5073788bad84eb29d9ebc168175cab0441b9fcf0b7ef7aae039b0.scope: Deactivated successfully.
Dec 05 10:25:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:25:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:25:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:25:58.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:25:58 compute-0 podman[277850]: 2025-12-05 10:25:58.527163145 +0000 UTC m=+0.047638927 container create db1d5a818005c5bae76be92edc6da9e42261faeb99cd32922b30689a84d173d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_wiles, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 10:25:58 compute-0 systemd[1]: Started libpod-conmon-db1d5a818005c5bae76be92edc6da9e42261faeb99cd32922b30689a84d173d9.scope.
Dec 05 10:25:58 compute-0 podman[277850]: 2025-12-05 10:25:58.507052918 +0000 UTC m=+0.027528720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:25:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bead2060862f52de193f9581bc5daf5a89d0c156a6b65436b5ea1b2fec6f95ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bead2060862f52de193f9581bc5daf5a89d0c156a6b65436b5ea1b2fec6f95ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bead2060862f52de193f9581bc5daf5a89d0c156a6b65436b5ea1b2fec6f95ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bead2060862f52de193f9581bc5daf5a89d0c156a6b65436b5ea1b2fec6f95ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:25:58 compute-0 podman[277850]: 2025-12-05 10:25:58.643615314 +0000 UTC m=+0.164091106 container init db1d5a818005c5bae76be92edc6da9e42261faeb99cd32922b30689a84d173d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_wiles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 10:25:58 compute-0 podman[277850]: 2025-12-05 10:25:58.652583047 +0000 UTC m=+0.173058829 container start db1d5a818005c5bae76be92edc6da9e42261faeb99cd32922b30689a84d173d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_wiles, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:25:58 compute-0 podman[277850]: 2025-12-05 10:25:58.657012048 +0000 UTC m=+0.177487830 container attach db1d5a818005c5bae76be92edc6da9e42261faeb99cd32922b30689a84d173d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:25:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:25:58.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]: {
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:     "1": [
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:         {
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             "devices": [
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "/dev/loop3"
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             ],
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             "lv_name": "ceph_lv0",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             "lv_size": "21470642176",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             "name": "ceph_lv0",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             "tags": {
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.cluster_name": "ceph",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.crush_device_class": "",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.encrypted": "0",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.osd_id": "1",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.type": "block",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.vdo": "0",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:                 "ceph.with_tpm": "0"
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             },
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             "type": "block",
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:             "vg_name": "ceph_vg0"
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:         }
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]:     ]
Dec 05 10:25:58 compute-0 flamboyant_wiles[277866]: }
Dec 05 10:25:58 compute-0 systemd[1]: libpod-db1d5a818005c5bae76be92edc6da9e42261faeb99cd32922b30689a84d173d9.scope: Deactivated successfully.
Dec 05 10:25:58 compute-0 podman[277850]: 2025-12-05 10:25:58.993610395 +0000 UTC m=+0.514086177 container died db1d5a818005c5bae76be92edc6da9e42261faeb99cd32922b30689a84d173d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 10:25:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bead2060862f52de193f9581bc5daf5a89d0c156a6b65436b5ea1b2fec6f95ae-merged.mount: Deactivated successfully.
Dec 05 10:25:59 compute-0 podman[277850]: 2025-12-05 10:25:59.042549536 +0000 UTC m=+0.563025328 container remove db1d5a818005c5bae76be92edc6da9e42261faeb99cd32922b30689a84d173d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_wiles, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:25:59 compute-0 systemd[1]: libpod-conmon-db1d5a818005c5bae76be92edc6da9e42261faeb99cd32922b30689a84d173d9.scope: Deactivated successfully.
Dec 05 10:25:59 compute-0 sudo[277741]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:59 compute-0 sudo[277888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:25:59 compute-0 sudo[277888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:59 compute-0 sudo[277888]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:59 compute-0 ceph-mon[74418]: pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 05 10:25:59 compute-0 sudo[277913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:25:59 compute-0 sudo[277913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:25:59 compute-0 podman[277976]: 2025-12-05 10:25:59.681273542 +0000 UTC m=+0.025619408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:25:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Dec 05 10:26:00 compute-0 podman[277976]: 2025-12-05 10:26:00.131324246 +0000 UTC m=+0.475670082 container create 6413a05a0aa7f677b61eee42bffaf26528826573a4d4302165195320faf676f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_engelbart, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 05 10:26:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:26:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:00.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:26:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:00.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:00 compute-0 systemd[1]: Started libpod-conmon-6413a05a0aa7f677b61eee42bffaf26528826573a4d4302165195320faf676f2.scope.
Dec 05 10:26:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:26:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:01 compute-0 podman[277976]: 2025-12-05 10:26:01.667790824 +0000 UTC m=+2.012136680 container init 6413a05a0aa7f677b61eee42bffaf26528826573a4d4302165195320faf676f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_engelbart, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:26:01 compute-0 podman[277976]: 2025-12-05 10:26:01.684775496 +0000 UTC m=+2.029121352 container start 6413a05a0aa7f677b61eee42bffaf26528826573a4d4302165195320faf676f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 10:26:01 compute-0 lucid_engelbart[278026]: 167 167
Dec 05 10:26:01 compute-0 systemd[1]: libpod-6413a05a0aa7f677b61eee42bffaf26528826573a4d4302165195320faf676f2.scope: Deactivated successfully.
Dec 05 10:26:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 05 10:26:01 compute-0 ceph-mon[74418]: pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Dec 05 10:26:01 compute-0 podman[277976]: 2025-12-05 10:26:01.931807776 +0000 UTC m=+2.276153632 container attach 6413a05a0aa7f677b61eee42bffaf26528826573a4d4302165195320faf676f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_engelbart, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 10:26:01 compute-0 podman[277976]: 2025-12-05 10:26:01.932451833 +0000 UTC m=+2.276797669 container died 6413a05a0aa7f677b61eee42bffaf26528826573a4d4302165195320faf676f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:26:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b67505784729a4c7a1f2ea77e64be1796330c8d7319c1855155185f30c97db37-merged.mount: Deactivated successfully.
Dec 05 10:26:01 compute-0 podman[277976]: 2025-12-05 10:26:01.997634676 +0000 UTC m=+2.341980512 container remove 6413a05a0aa7f677b61eee42bffaf26528826573a4d4302165195320faf676f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 05 10:26:02 compute-0 podman[277991]: 2025-12-05 10:26:02.00034149 +0000 UTC m=+1.825204623 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 10:26:02 compute-0 systemd[1]: libpod-conmon-6413a05a0aa7f677b61eee42bffaf26528826573a4d4302165195320faf676f2.scope: Deactivated successfully.
Dec 05 10:26:02 compute-0 podman[277990]: 2025-12-05 10:26:02.010015144 +0000 UTC m=+1.834935469 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:26:02 compute-0 podman[277992]: 2025-12-05 10:26:02.079353579 +0000 UTC m=+1.899784972 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 10:26:02 compute-0 nova_compute[257087]: 2025-12-05 10:26:02.122 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:26:02 compute-0 nova_compute[257087]: 2025-12-05 10:26:02.124 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:26:02 compute-0 nova_compute[257087]: 2025-12-05 10:26:02.124 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:26:02 compute-0 nova_compute[257087]: 2025-12-05 10:26:02.124 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:26:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:02.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:02 compute-0 nova_compute[257087]: 2025-12-05 10:26:02.164 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:02 compute-0 nova_compute[257087]: 2025-12-05 10:26:02.164 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:26:02 compute-0 podman[278082]: 2025-12-05 10:26:02.175606448 +0000 UTC m=+0.030751677 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:26:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:02.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:26:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:26:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:26:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:26:03 compute-0 podman[278082]: 2025-12-05 10:26:03.210569053 +0000 UTC m=+1.065714262 container create b119cd3ba60e12da9828bdf71dcb6cbfb1acaa7b41260cd3abebbe4fbaa3ae7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:26:03 compute-0 ceph-mon[74418]: pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 05 10:26:03 compute-0 systemd[1]: Started libpod-conmon-b119cd3ba60e12da9828bdf71dcb6cbfb1acaa7b41260cd3abebbe4fbaa3ae7b.scope.
Dec 05 10:26:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e75b64cb780b7e36db72e8fd09c4a89a5f0f94a688ba07a1c4960e00941dd39a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e75b64cb780b7e36db72e8fd09c4a89a5f0f94a688ba07a1c4960e00941dd39a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e75b64cb780b7e36db72e8fd09c4a89a5f0f94a688ba07a1c4960e00941dd39a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e75b64cb780b7e36db72e8fd09c4a89a5f0f94a688ba07a1c4960e00941dd39a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:26:03 compute-0 podman[278082]: 2025-12-05 10:26:03.314412077 +0000 UTC m=+1.169557306 container init b119cd3ba60e12da9828bdf71dcb6cbfb1acaa7b41260cd3abebbe4fbaa3ae7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:26:03 compute-0 podman[278082]: 2025-12-05 10:26:03.32221433 +0000 UTC m=+1.177359539 container start b119cd3ba60e12da9828bdf71dcb6cbfb1acaa7b41260cd3abebbe4fbaa3ae7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 10:26:03 compute-0 podman[278082]: 2025-12-05 10:26:03.325984102 +0000 UTC m=+1.181129311 container attach b119cd3ba60e12da9828bdf71dcb6cbfb1acaa7b41260cd3abebbe4fbaa3ae7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_antonelli, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 10:26:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:03.764Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:26:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:03.766Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:26:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Dec 05 10:26:04 compute-0 lvm[278175]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:26:04 compute-0 lvm[278175]: VG ceph_vg0 finished
Dec 05 10:26:04 compute-0 friendly_antonelli[278101]: {}
Dec 05 10:26:04 compute-0 systemd[1]: libpod-b119cd3ba60e12da9828bdf71dcb6cbfb1acaa7b41260cd3abebbe4fbaa3ae7b.scope: Deactivated successfully.
Dec 05 10:26:04 compute-0 systemd[1]: libpod-b119cd3ba60e12da9828bdf71dcb6cbfb1acaa7b41260cd3abebbe4fbaa3ae7b.scope: Consumed 1.287s CPU time.
Dec 05 10:26:04 compute-0 podman[278082]: 2025-12-05 10:26:04.114931325 +0000 UTC m=+1.970076534 container died b119cd3ba60e12da9828bdf71dcb6cbfb1acaa7b41260cd3abebbe4fbaa3ae7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_antonelli, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:26:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:04.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:04.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:04 compute-0 nova_compute[257087]: 2025-12-05 10:26:04.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:26:05] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Dec 05 10:26:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:26:05] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Dec 05 10:26:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:26:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:06.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:06.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:07 compute-0 nova_compute[257087]: 2025-12-05 10:26:07.166 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:26:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:07.472Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:26:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:26:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:26:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:26:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:08.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:08.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:08 compute-0 nova_compute[257087]: 2025-12-05 10:26:08.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:08 compute-0 nova_compute[257087]: 2025-12-05 10:26:08.527 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:08.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:26:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:08.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:26:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:08.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:26:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:10.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:26:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:10.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:26:10 compute-0 nova_compute[257087]: 2025-12-05 10:26:10.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e75b64cb780b7e36db72e8fd09c4a89a5f0f94a688ba07a1c4960e00941dd39a-merged.mount: Deactivated successfully.
Dec 05 10:26:10 compute-0 ceph-mon[74418]: pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Dec 05 10:26:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:11 compute-0 nova_compute[257087]: 2025-12-05 10:26:11.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:11 compute-0 nova_compute[257087]: 2025-12-05 10:26:11.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:11 compute-0 podman[278082]: 2025-12-05 10:26:11.606107086 +0000 UTC m=+9.461252295 container remove b119cd3ba60e12da9828bdf71dcb6cbfb1acaa7b41260cd3abebbe4fbaa3ae7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:26:11 compute-0 systemd[1]: libpod-conmon-b119cd3ba60e12da9828bdf71dcb6cbfb1acaa7b41260cd3abebbe4fbaa3ae7b.scope: Deactivated successfully.
Dec 05 10:26:11 compute-0 sudo[277913]: pam_unix(sudo:session): session closed for user root
Dec 05 10:26:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:26:11 compute-0 nova_compute[257087]: 2025-12-05 10:26:11.711 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:26:11 compute-0 nova_compute[257087]: 2025-12-05 10:26:11.711 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:26:11 compute-0 nova_compute[257087]: 2025-12-05 10:26:11.711 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:26:11 compute-0 nova_compute[257087]: 2025-12-05 10:26:11.712 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:26:11 compute-0 nova_compute[257087]: 2025-12-05 10:26:11.712 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:26:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:26:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:26:12 compute-0 ceph-mon[74418]: pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:26:12 compute-0 ceph-mon[74418]: pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:12 compute-0 ceph-mon[74418]: pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:12 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2630885699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:26:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:12.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:26:12 compute-0 nova_compute[257087]: 2025-12-05 10:26:12.173 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:26:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3399269210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:26:12 compute-0 nova_compute[257087]: 2025-12-05 10:26:12.241 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:26:12 compute-0 sudo[278221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:26:12 compute-0 sudo[278221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:26:12 compute-0 sudo[278221]: pam_unix(sudo:session): session closed for user root
Dec 05 10:26:12 compute-0 nova_compute[257087]: 2025-12-05 10:26:12.429 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:26:12 compute-0 nova_compute[257087]: 2025-12-05 10:26:12.432 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4587MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:26:12 compute-0 nova_compute[257087]: 2025-12-05 10:26:12.432 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:26:12 compute-0 nova_compute[257087]: 2025-12-05 10:26:12.432 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:26:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:12.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:26:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:26:12 compute-0 nova_compute[257087]: 2025-12-05 10:26:12.636 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:26:12 compute-0 nova_compute[257087]: 2025-12-05 10:26:12.637 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:26:12 compute-0 nova_compute[257087]: 2025-12-05 10:26:12.653 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:26:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:26:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:26:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:26:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:26:13 compute-0 ceph-mon[74418]: pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:26:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:26:13 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2394881427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:26:13 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3399269210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:26:13 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2681380258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:26:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:26:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:26:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/631344455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:26:13 compute-0 nova_compute[257087]: 2025-12-05 10:26:13.309 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:26:13 compute-0 nova_compute[257087]: 2025-12-05 10:26:13.315 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:26:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:13.767Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:14 compute-0 nova_compute[257087]: 2025-12-05 10:26:14.004 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:26:14 compute-0 nova_compute[257087]: 2025-12-05 10:26:14.007 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:26:14 compute-0 nova_compute[257087]: 2025-12-05 10:26:14.007 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:26:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:14.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:14.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/631344455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:26:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3588198362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:26:15 compute-0 nova_compute[257087]: 2025-12-05 10:26:15.006 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:26:15] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:26:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:26:15] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:26:15 compute-0 sudo[278274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:26:15 compute-0 sudo[278274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:26:15 compute-0 sudo[278274]: pam_unix(sudo:session): session closed for user root
Dec 05 10:26:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:16.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:16.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:16 compute-0 ceph-mon[74418]: pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:16 compute-0 nova_compute[257087]: 2025-12-05 10:26:16.933 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:16 compute-0 nova_compute[257087]: 2025-12-05 10:26:16.934 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:26:16 compute-0 nova_compute[257087]: 2025-12-05 10:26:16.934 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:26:17 compute-0 nova_compute[257087]: 2025-12-05 10:26:17.003 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:26:17 compute-0 nova_compute[257087]: 2025-12-05 10:26:17.003 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:17 compute-0 nova_compute[257087]: 2025-12-05 10:26:17.003 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:17 compute-0 nova_compute[257087]: 2025-12-05 10:26:17.003 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:26:17 compute-0 nova_compute[257087]: 2025-12-05 10:26:17.170 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:17 compute-0 nova_compute[257087]: 2025-12-05 10:26:17.177 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:17.473Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:26:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:26:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:26:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:26:18 compute-0 ceph-mon[74418]: pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:18.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:18.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:18.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:19 compute-0 ceph-mon[74418]: pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:26:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:20.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:20.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:26:20.584 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:26:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:26:20.585 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:26:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:26:20.586 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:26:21 compute-0 ceph-mon[74418]: pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:26:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:26:22 compute-0 nova_compute[257087]: 2025-12-05 10:26:22.179 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:26:22 compute-0 nova_compute[257087]: 2025-12-05 10:26:22.181 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:26:22 compute-0 nova_compute[257087]: 2025-12-05 10:26:22.181 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:26:22 compute-0 nova_compute[257087]: 2025-12-05 10:26:22.181 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:26:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:22.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:22 compute-0 nova_compute[257087]: 2025-12-05 10:26:22.189 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:22 compute-0 nova_compute[257087]: 2025-12-05 10:26:22.190 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:26:22 compute-0 ceph-mon[74418]: pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:26:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:22.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:26:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:26:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:26:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:26:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:23.768Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:26:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:24.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:24.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:24 compute-0 ceph-mon[74418]: pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:26:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:26:25] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:26:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:26:25] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:26:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:26:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:26.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:26.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:27 compute-0 ceph-mon[74418]: pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:26:27 compute-0 nova_compute[257087]: 2025-12-05 10:26:27.191 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:27.475Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:26:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:26:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:26:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:26:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:26:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:26:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:26:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:26:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:26:27
Dec 05 10:26:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:26:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:26:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', '.nfs', '.mgr', 'backups', 'vms']
Dec 05 10:26:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:26:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:26:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:26:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:26:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:26:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:26:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:26:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:26:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:28.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:26:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:26:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:28.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:28.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:29 compute-0 ceph-mon[74418]: pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:26:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:26:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:26:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:30.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:26:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:30.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:31 compute-0 ceph-mon[74418]: pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:26:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:32 compute-0 nova_compute[257087]: 2025-12-05 10:26:32.194 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:26:32 compute-0 nova_compute[257087]: 2025-12-05 10:26:32.196 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:26:32 compute-0 nova_compute[257087]: 2025-12-05 10:26:32.196 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:26:32 compute-0 nova_compute[257087]: 2025-12-05 10:26:32.197 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:26:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:32.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:32 compute-0 nova_compute[257087]: 2025-12-05 10:26:32.491 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:32 compute-0 nova_compute[257087]: 2025-12-05 10:26:32.492 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:26:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:32.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:32 compute-0 podman[278317]: 2025-12-05 10:26:32.625894564 +0000 UTC m=+0.083840022 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:26:32 compute-0 podman[278319]: 2025-12-05 10:26:32.625855973 +0000 UTC m=+0.083806651 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 10:26:32 compute-0 podman[278318]: 2025-12-05 10:26:32.655867579 +0000 UTC m=+0.113817787 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:26:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:26:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:26:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:26:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:26:33 compute-0 ceph-mon[74418]: pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:33.769Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:34.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:34.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:34 compute-0 ceph-mon[74418]: pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:26:35] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:26:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:26:35] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:26:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:35 compute-0 sudo[278383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:26:35 compute-0 sudo[278383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:26:35 compute-0 sudo[278383]: pam_unix(sudo:session): session closed for user root
Dec 05 10:26:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:36.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:37 compute-0 ceph-mon[74418]: pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:37.476Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:37 compute-0 nova_compute[257087]: 2025-12-05 10:26:37.493 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:26:37 compute-0 nova_compute[257087]: 2025-12-05 10:26:37.494 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:37 compute-0 nova_compute[257087]: 2025-12-05 10:26:37.495 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:26:37 compute-0 nova_compute[257087]: 2025-12-05 10:26:37.495 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:26:37 compute-0 nova_compute[257087]: 2025-12-05 10:26:37.496 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:26:37 compute-0 nova_compute[257087]: 2025-12-05 10:26:37.497 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:26:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:26:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:26:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:26:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:38.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:38.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:38.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:26:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:38.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:26:39 compute-0 ceph-mon[74418]: pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:40.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:40.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:41 compute-0 ceph-mon[74418]: pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:26:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:42.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:26:42 compute-0 nova_compute[257087]: 2025-12-05 10:26:42.494 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:42 compute-0 nova_compute[257087]: 2025-12-05 10:26:42.497 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:42.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:26:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:26:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:26:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:26:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:26:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:26:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:43.770Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:43 compute-0 ceph-mon[74418]: pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:26:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:44.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:44.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:45 compute-0 ceph-mon[74418]: pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:26:45] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:26:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:26:45] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:26:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:46.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:46.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:47 compute-0 ceph-mon[74418]: pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:47.477Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:47 compute-0 nova_compute[257087]: 2025-12-05 10:26:47.497 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:26:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:26:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:26:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:26:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:48.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:48 compute-0 ceph-mon[74418]: pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:26:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:48.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:26:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:48.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:26:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:48.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:26:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:50.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:50.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:51 compute-0 ceph-mon[74418]: pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:52.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:52 compute-0 nova_compute[257087]: 2025-12-05 10:26:52.499 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:26:52 compute-0 nova_compute[257087]: 2025-12-05 10:26:52.500 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:52 compute-0 nova_compute[257087]: 2025-12-05 10:26:52.500 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:26:52 compute-0 nova_compute[257087]: 2025-12-05 10:26:52.501 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:26:52 compute-0 nova_compute[257087]: 2025-12-05 10:26:52.501 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:26:52 compute-0 nova_compute[257087]: 2025-12-05 10:26:52.502 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:52 compute-0 ceph-mon[74418]: pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:52.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:26:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:26:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:26:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:26:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:53.771Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:54.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:54.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:55 compute-0 ceph-mon[74418]: pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:26:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:26:55] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:26:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:26:55] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:26:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:56 compute-0 sudo[278429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:26:56 compute-0 sudo[278429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:26:56 compute-0 sudo[278429]: pam_unix(sudo:session): session closed for user root
Dec 05 10:26:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:56.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:56.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:57 compute-0 ceph-mon[74418]: pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/82322053' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:26:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/82322053' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:26:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:57.479Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:26:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:57.480Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:57 compute-0 nova_compute[257087]: 2025-12-05 10:26:57.501 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:57 compute-0 nova_compute[257087]: 2025-12-05 10:26:57.503 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:26:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:26:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:26:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:26:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:26:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:26:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:26:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:26:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:26:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:26:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:26:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:26:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:26:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:26:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:26:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:26:58.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:26:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:26:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:26:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:26:58.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:26:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:26:58.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:26:59 compute-0 ceph-mon[74418]: pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:26:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:00.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:00 compute-0 ceph-mon[74418]: pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:00.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:27:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:02.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:27:02 compute-0 nova_compute[257087]: 2025-12-05 10:27:02.503 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:02.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:27:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:27:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:27:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:27:03 compute-0 podman[278462]: 2025-12-05 10:27:03.420042955 +0000 UTC m=+0.075016012 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:27:03 compute-0 podman[278464]: 2025-12-05 10:27:03.426283984 +0000 UTC m=+0.074241950 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:27:03 compute-0 ceph-mon[74418]: pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:03 compute-0 podman[278463]: 2025-12-05 10:27:03.456451485 +0000 UTC m=+0.110160837 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:27:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:03.772Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:04.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:04.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:27:05] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Dec 05 10:27:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:27:05] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Dec 05 10:27:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:06.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:06 compute-0 nova_compute[257087]: 2025-12-05 10:27:06.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:06.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:07.481Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:07 compute-0 nova_compute[257087]: 2025-12-05 10:27:07.506 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:27:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:27:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:27:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:27:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:27:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:08.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:08 compute-0 nova_compute[257087]: 2025-12-05 10:27:08.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:08.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:08.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:10.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:10 compute-0 nova_compute[257087]: 2025-12-05 10:27:10.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:10 compute-0 ceph-mon[74418]: pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:10.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:11 compute-0 nova_compute[257087]: 2025-12-05 10:27:11.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:11 compute-0 nova_compute[257087]: 2025-12-05 10:27:11.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:11 compute-0 nova_compute[257087]: 2025-12-05 10:27:11.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:11 compute-0 nova_compute[257087]: 2025-12-05 10:27:11.811 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:27:11 compute-0 nova_compute[257087]: 2025-12-05 10:27:11.812 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:27:11 compute-0 nova_compute[257087]: 2025-12-05 10:27:11.813 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:27:11 compute-0 nova_compute[257087]: 2025-12-05 10:27:11.813 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:27:11 compute-0 nova_compute[257087]: 2025-12-05 10:27:11.814 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:27:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:11 compute-0 ceph-mon[74418]: pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:11 compute-0 ceph-mon[74418]: pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:11 compute-0 ceph-mon[74418]: pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:27:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:12.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:27:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:27:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1554819097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:27:12 compute-0 nova_compute[257087]: 2025-12-05 10:27:12.408 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:27:12 compute-0 nova_compute[257087]: 2025-12-05 10:27:12.508 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:12 compute-0 sudo[278559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:27:12 compute-0 sudo[278559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:27:12 compute-0 sudo[278559]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:27:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:27:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:12.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:12 compute-0 sudo[278584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:27:12 compute-0 sudo[278584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:27:12 compute-0 nova_compute[257087]: 2025-12-05 10:27:12.706 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:27:12 compute-0 nova_compute[257087]: 2025-12-05 10:27:12.707 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4606MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:27:12 compute-0 nova_compute[257087]: 2025-12-05 10:27:12.708 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:27:12 compute-0 nova_compute[257087]: 2025-12-05 10:27:12.708 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:27:12 compute-0 nova_compute[257087]: 2025-12-05 10:27:12.869 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:27:12 compute-0 nova_compute[257087]: 2025-12-05 10:27:12.870 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:27:12 compute-0 nova_compute[257087]: 2025-12-05 10:27:12.925 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:27:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 10:27:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:27:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 10:27:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:27:12 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3034375331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:27:12 compute-0 ceph-mon[74418]: pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:12 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1554819097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:27:12 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3121228430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:27:12 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/608039490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:27:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:27:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:27:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:27:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:27:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:27:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:27:13 compute-0 sudo[278584]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:27:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3834373150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:27:13 compute-0 nova_compute[257087]: 2025-12-05 10:27:13.436 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:27:13 compute-0 nova_compute[257087]: 2025-12-05 10:27:13.443 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:27:13 compute-0 nova_compute[257087]: 2025-12-05 10:27:13.468 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:27:13 compute-0 nova_compute[257087]: 2025-12-05 10:27:13.470 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:27:13 compute-0 nova_compute[257087]: 2025-12-05 10:27:13.470 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:27:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:27:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:27:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:27:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:27:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:27:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:27:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:27:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:27:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:27:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:27:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:27:13 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:27:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:27:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:27:13 compute-0 sshd-session[278664]: Accepted publickey for zuul from 192.168.122.10 port 58584 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 10:27:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:13.773Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:27:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:13.773Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:13 compute-0 systemd-logind[789]: New session 56 of user zuul.
Dec 05 10:27:13 compute-0 systemd[1]: Started Session 56 of User zuul.
Dec 05 10:27:13 compute-0 sshd-session[278664]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 10:27:13 compute-0 sudo[278666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:27:13 compute-0 sudo[278666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:27:13 compute-0 sudo[278666]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:13 compute-0 sudo[278693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:27:13 compute-0 sudo[278693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:27:13 compute-0 sudo[278705]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 05 10:27:13 compute-0 sudo[278705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:27:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:27:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3834373150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:27:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2611559788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:27:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:27:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:27:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:27:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:27:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:27:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:27:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:27:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:14.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:14 compute-0 podman[278794]: 2025-12-05 10:27:14.345439872 +0000 UTC m=+0.050428713 container create bff2d590f3e0f94a639fd4283c2bfd2c1bb89f7224a2286cc3d1593ee7b33be4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 05 10:27:14 compute-0 podman[278794]: 2025-12-05 10:27:14.324191693 +0000 UTC m=+0.029180554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:27:14 compute-0 nova_compute[257087]: 2025-12-05 10:27:14.471 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:14 compute-0 nova_compute[257087]: 2025-12-05 10:27:14.472 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:14 compute-0 nova_compute[257087]: 2025-12-05 10:27:14.472 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:27:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:14.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:15 compute-0 ceph-mon[74418]: pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:15 compute-0 systemd[1]: Started libpod-conmon-bff2d590f3e0f94a639fd4283c2bfd2c1bb89f7224a2286cc3d1593ee7b33be4.scope.
Dec 05 10:27:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:27:15 compute-0 podman[278794]: 2025-12-05 10:27:15.117044262 +0000 UTC m=+0.822033123 container init bff2d590f3e0f94a639fd4283c2bfd2c1bb89f7224a2286cc3d1593ee7b33be4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 10:27:15 compute-0 podman[278794]: 2025-12-05 10:27:15.12836143 +0000 UTC m=+0.833350281 container start bff2d590f3e0f94a639fd4283c2bfd2c1bb89f7224a2286cc3d1593ee7b33be4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_shaw, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 10:27:15 compute-0 podman[278794]: 2025-12-05 10:27:15.132983656 +0000 UTC m=+0.837972517 container attach bff2d590f3e0f94a639fd4283c2bfd2c1bb89f7224a2286cc3d1593ee7b33be4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_shaw, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 10:27:15 compute-0 strange_shaw[278814]: 167 167
Dec 05 10:27:15 compute-0 systemd[1]: libpod-bff2d590f3e0f94a639fd4283c2bfd2c1bb89f7224a2286cc3d1593ee7b33be4.scope: Deactivated successfully.
Dec 05 10:27:15 compute-0 podman[278794]: 2025-12-05 10:27:15.136600154 +0000 UTC m=+0.841589025 container died bff2d590f3e0f94a639fd4283c2bfd2c1bb89f7224a2286cc3d1593ee7b33be4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f31c383223145b2c87d719967fa2bf21629ddbf2b081631eba43acd56dcda14-merged.mount: Deactivated successfully.
Dec 05 10:27:15 compute-0 podman[278794]: 2025-12-05 10:27:15.18644634 +0000 UTC m=+0.891435181 container remove bff2d590f3e0f94a639fd4283c2bfd2c1bb89f7224a2286cc3d1593ee7b33be4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:27:15 compute-0 systemd[1]: libpod-conmon-bff2d590f3e0f94a639fd4283c2bfd2c1bb89f7224a2286cc3d1593ee7b33be4.scope: Deactivated successfully.
Dec 05 10:27:15 compute-0 podman[278869]: 2025-12-05 10:27:15.363049175 +0000 UTC m=+0.046113095 container create eb817ea77addd2d5754ae6170a1578386970ceab4eabbbf9eccee77b85551f18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_elgamal, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:27:15 compute-0 systemd[1]: Started libpod-conmon-eb817ea77addd2d5754ae6170a1578386970ceab4eabbbf9eccee77b85551f18.scope.
Dec 05 10:27:15 compute-0 podman[278869]: 2025-12-05 10:27:15.345644731 +0000 UTC m=+0.028708681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:27:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d3a49e9ddbc1f7a8703f113357eaa5170ea8c5092a7a58dfae012b01dc6a98e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d3a49e9ddbc1f7a8703f113357eaa5170ea8c5092a7a58dfae012b01dc6a98e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d3a49e9ddbc1f7a8703f113357eaa5170ea8c5092a7a58dfae012b01dc6a98e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d3a49e9ddbc1f7a8703f113357eaa5170ea8c5092a7a58dfae012b01dc6a98e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d3a49e9ddbc1f7a8703f113357eaa5170ea8c5092a7a58dfae012b01dc6a98e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:15 compute-0 podman[278869]: 2025-12-05 10:27:15.478009942 +0000 UTC m=+0.161073882 container init eb817ea77addd2d5754ae6170a1578386970ceab4eabbbf9eccee77b85551f18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_elgamal, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:27:15 compute-0 podman[278869]: 2025-12-05 10:27:15.486081282 +0000 UTC m=+0.169145202 container start eb817ea77addd2d5754ae6170a1578386970ceab4eabbbf9eccee77b85551f18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_elgamal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:27:15 compute-0 podman[278869]: 2025-12-05 10:27:15.49040557 +0000 UTC m=+0.173469490 container attach eb817ea77addd2d5754ae6170a1578386970ceab4eabbbf9eccee77b85551f18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_elgamal, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 05 10:27:15 compute-0 nova_compute[257087]: 2025-12-05 10:27:15.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:15 compute-0 nova_compute[257087]: 2025-12-05 10:27:15.532 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:27:15 compute-0 nova_compute[257087]: 2025-12-05 10:27:15.532 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:27:15 compute-0 nova_compute[257087]: 2025-12-05 10:27:15.549 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:27:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:27:15] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Dec 05 10:27:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:27:15] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Dec 05 10:27:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:27:15 compute-0 zen_elgamal[278897]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:27:15 compute-0 zen_elgamal[278897]: --> All data devices are unavailable
Dec 05 10:27:15 compute-0 systemd[1]: libpod-eb817ea77addd2d5754ae6170a1578386970ceab4eabbbf9eccee77b85551f18.scope: Deactivated successfully.
Dec 05 10:27:15 compute-0 podman[278869]: 2025-12-05 10:27:15.925504496 +0000 UTC m=+0.608568446 container died eb817ea77addd2d5754ae6170a1578386970ceab4eabbbf9eccee77b85551f18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d3a49e9ddbc1f7a8703f113357eaa5170ea8c5092a7a58dfae012b01dc6a98e-merged.mount: Deactivated successfully.
Dec 05 10:27:15 compute-0 podman[278869]: 2025-12-05 10:27:15.983755311 +0000 UTC m=+0.666819231 container remove eb817ea77addd2d5754ae6170a1578386970ceab4eabbbf9eccee77b85551f18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_elgamal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:27:15 compute-0 systemd[1]: libpod-conmon-eb817ea77addd2d5754ae6170a1578386970ceab4eabbbf9eccee77b85551f18.scope: Deactivated successfully.
Dec 05 10:27:16 compute-0 sudo[278693]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:16 compute-0 sudo[278951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:27:16 compute-0 sudo[278951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:27:16 compute-0 sudo[278951]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:16 compute-0 sudo[278959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:27:16 compute-0 sudo[278959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:27:16 compute-0 sudo[278959]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:16 compute-0 sudo[279000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:27:16 compute-0 sudo[279000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:27:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:16.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.434050) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930436434202, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1173, "num_deletes": 252, "total_data_size": 2151559, "memory_usage": 2179216, "flush_reason": "Manual Compaction"}
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930436460387, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2094803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31788, "largest_seqno": 32960, "table_properties": {"data_size": 2088990, "index_size": 3144, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11537, "raw_average_key_size": 18, "raw_value_size": 2077430, "raw_average_value_size": 3356, "num_data_blocks": 134, "num_entries": 619, "num_filter_entries": 619, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930326, "oldest_key_time": 1764930326, "file_creation_time": 1764930436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 26418 microseconds, and 8701 cpu microseconds.
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.460474) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2094803 bytes OK
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.460513) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.468427) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.468484) EVENT_LOG_v1 {"time_micros": 1764930436468472, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.468561) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2146240, prev total WAL file size 2146240, number of live WAL files 2.
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.470388) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323537' seq:72057594037927935, type:22 .. '6B7600353130' seq:0, type:0; will stop at (end)
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2045KB)], [68(14MB)]
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930436470692, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 17077552, "oldest_snapshot_seqno": -1}
Dec 05 10:27:16 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.25916 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:16 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26308 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:16.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6486 keys, 15666457 bytes, temperature: kUnknown
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930436698730, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 15666457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15621734, "index_size": 27397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 167707, "raw_average_key_size": 25, "raw_value_size": 15503401, "raw_average_value_size": 2390, "num_data_blocks": 1081, "num_entries": 6486, "num_filter_entries": 6486, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764930436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.699083) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 15666457 bytes
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.701139) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.9 rd, 68.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 14.3 +0.0 blob) out(14.9 +0.0 blob), read-write-amplify(15.6) write-amplify(7.5) OK, records in: 7008, records dropped: 522 output_compression: NoCompression
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.701198) EVENT_LOG_v1 {"time_micros": 1764930436701177, "job": 38, "event": "compaction_finished", "compaction_time_micros": 228153, "compaction_time_cpu_micros": 63317, "output_level": 6, "num_output_files": 1, "total_output_size": 15666457, "num_input_records": 7008, "num_output_records": 6486, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930436701914, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec 05 10:27:16 compute-0 podman[279113]: 2025-12-05 10:27:16.702169834 +0000 UTC m=+0.072964776 container create fbfa51f8bb05cc7c2d2e3c8ce6abd5e2082defa68f9ec16124297c6769a814bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_booth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930436705465, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.470143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.705586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.705596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.705598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.705600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:27:16 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:27:16.705601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:27:16 compute-0 podman[279113]: 2025-12-05 10:27:16.654094107 +0000 UTC m=+0.024889069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:27:16 compute-0 systemd[1]: Started libpod-conmon-fbfa51f8bb05cc7c2d2e3c8ce6abd5e2082defa68f9ec16124297c6769a814bd.scope.
Dec 05 10:27:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:27:16 compute-0 podman[279113]: 2025-12-05 10:27:16.815039856 +0000 UTC m=+0.185834808 container init fbfa51f8bb05cc7c2d2e3c8ce6abd5e2082defa68f9ec16124297c6769a814bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_booth, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 05 10:27:16 compute-0 podman[279113]: 2025-12-05 10:27:16.822803256 +0000 UTC m=+0.193598198 container start fbfa51f8bb05cc7c2d2e3c8ce6abd5e2082defa68f9ec16124297c6769a814bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:27:16 compute-0 podman[279113]: 2025-12-05 10:27:16.826126517 +0000 UTC m=+0.196921459 container attach fbfa51f8bb05cc7c2d2e3c8ce6abd5e2082defa68f9ec16124297c6769a814bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 10:27:16 compute-0 boring_booth[279148]: 167 167
Dec 05 10:27:16 compute-0 systemd[1]: libpod-fbfa51f8bb05cc7c2d2e3c8ce6abd5e2082defa68f9ec16124297c6769a814bd.scope: Deactivated successfully.
Dec 05 10:27:16 compute-0 podman[279113]: 2025-12-05 10:27:16.83249751 +0000 UTC m=+0.203292442 container died fbfa51f8bb05cc7c2d2e3c8ce6abd5e2082defa68f9ec16124297c6769a814bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-096e16a0733abf961379c991e96c68f35c5080c483046fe01c6e3a57de498aa3-merged.mount: Deactivated successfully.
Dec 05 10:27:16 compute-0 podman[279113]: 2025-12-05 10:27:16.877580207 +0000 UTC m=+0.248375139 container remove fbfa51f8bb05cc7c2d2e3c8ce6abd5e2082defa68f9ec16124297c6769a814bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_booth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:27:16 compute-0 systemd[1]: libpod-conmon-fbfa51f8bb05cc7c2d2e3c8ce6abd5e2082defa68f9ec16124297c6769a814bd.scope: Deactivated successfully.
Dec 05 10:27:16 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26320 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:17 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16101 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:17 compute-0 ceph-mon[74418]: pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:27:17 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.25931 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:17 compute-0 podman[279171]: 2025-12-05 10:27:17.065359025 +0000 UTC m=+0.054786292 container create e71b1730c15b4aef7470e7a0901e74c1bdaeacfeee2d67988604915abc3a487e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:27:17 compute-0 systemd[1]: Started libpod-conmon-e71b1730c15b4aef7470e7a0901e74c1bdaeacfeee2d67988604915abc3a487e.scope.
Dec 05 10:27:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73147d53f55450c9c0c7ab77356439fa256e7e44ad580cf4c4b80a2f1d58b400/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73147d53f55450c9c0c7ab77356439fa256e7e44ad580cf4c4b80a2f1d58b400/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73147d53f55450c9c0c7ab77356439fa256e7e44ad580cf4c4b80a2f1d58b400/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73147d53f55450c9c0c7ab77356439fa256e7e44ad580cf4c4b80a2f1d58b400/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:17 compute-0 podman[279171]: 2025-12-05 10:27:17.04349586 +0000 UTC m=+0.032923157 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:27:17 compute-0 podman[279171]: 2025-12-05 10:27:17.149587736 +0000 UTC m=+0.139015003 container init e71b1730c15b4aef7470e7a0901e74c1bdaeacfeee2d67988604915abc3a487e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_joliot, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:27:17 compute-0 podman[279171]: 2025-12-05 10:27:17.157400459 +0000 UTC m=+0.146827726 container start e71b1730c15b4aef7470e7a0901e74c1bdaeacfeee2d67988604915abc3a487e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_joliot, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 05 10:27:17 compute-0 podman[279171]: 2025-12-05 10:27:17.161712156 +0000 UTC m=+0.151139423 container attach e71b1730c15b4aef7470e7a0901e74c1bdaeacfeee2d67988604915abc3a487e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]: {
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:     "1": [
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:         {
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             "devices": [
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "/dev/loop3"
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             ],
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             "lv_name": "ceph_lv0",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             "lv_size": "21470642176",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             "name": "ceph_lv0",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             "tags": {
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.cluster_name": "ceph",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.crush_device_class": "",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.encrypted": "0",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.osd_id": "1",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.type": "block",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.vdo": "0",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:                 "ceph.with_tpm": "0"
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             },
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             "type": "block",
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:             "vg_name": "ceph_vg0"
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:         }
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]:     ]
Dec 05 10:27:17 compute-0 beautiful_joliot[279193]: }
Dec 05 10:27:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:17.482Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:17 compute-0 nova_compute[257087]: 2025-12-05 10:27:17.510 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:17 compute-0 systemd[1]: libpod-e71b1730c15b4aef7470e7a0901e74c1bdaeacfeee2d67988604915abc3a487e.scope: Deactivated successfully.
Dec 05 10:27:17 compute-0 podman[279171]: 2025-12-05 10:27:17.516870898 +0000 UTC m=+0.506298185 container died e71b1730c15b4aef7470e7a0901e74c1bdaeacfeee2d67988604915abc3a487e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-73147d53f55450c9c0c7ab77356439fa256e7e44ad580cf4c4b80a2f1d58b400-merged.mount: Deactivated successfully.
Dec 05 10:27:17 compute-0 podman[279171]: 2025-12-05 10:27:17.57321182 +0000 UTC m=+0.562639087 container remove e71b1730c15b4aef7470e7a0901e74c1bdaeacfeee2d67988604915abc3a487e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_joliot, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 10:27:17 compute-0 systemd[1]: libpod-conmon-e71b1730c15b4aef7470e7a0901e74c1bdaeacfeee2d67988604915abc3a487e.scope: Deactivated successfully.
Dec 05 10:27:17 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16122 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:17 compute-0 sudo[279000]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:27:17 compute-0 sudo[279240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:27:17 compute-0 sudo[279240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:27:17 compute-0 sudo[279240]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:17 compute-0 sudo[279265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:27:17 compute-0 sudo[279265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:27:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:27:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:27:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:27:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:27:18 compute-0 ceph-mon[74418]: from='client.25916 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:18 compute-0 ceph-mon[74418]: from='client.26308 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:18 compute-0 ceph-mon[74418]: from='client.26320 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:18 compute-0 ceph-mon[74418]: from='client.16101 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:18 compute-0 ceph-mon[74418]: from='client.25931 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3482218533' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 10:27:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1368570681' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 10:27:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec 05 10:27:18 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1137275178' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 10:27:18 compute-0 podman[279352]: 2025-12-05 10:27:18.24992967 +0000 UTC m=+0.050582767 container create 8d4389a8949c369826968e7c52e8eabe67406798977f4c5856bd72187514aafc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 10:27:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:18.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:18 compute-0 systemd[1]: Started libpod-conmon-8d4389a8949c369826968e7c52e8eabe67406798977f4c5856bd72187514aafc.scope.
Dec 05 10:27:18 compute-0 podman[279352]: 2025-12-05 10:27:18.230744398 +0000 UTC m=+0.031397515 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:27:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:27:18 compute-0 podman[279352]: 2025-12-05 10:27:18.361082043 +0000 UTC m=+0.161735160 container init 8d4389a8949c369826968e7c52e8eabe67406798977f4c5856bd72187514aafc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poitras, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:27:18 compute-0 podman[279352]: 2025-12-05 10:27:18.370155681 +0000 UTC m=+0.170808768 container start 8d4389a8949c369826968e7c52e8eabe67406798977f4c5856bd72187514aafc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 10:27:18 compute-0 podman[279352]: 2025-12-05 10:27:18.374326614 +0000 UTC m=+0.174979721 container attach 8d4389a8949c369826968e7c52e8eabe67406798977f4c5856bd72187514aafc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poitras, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 05 10:27:18 compute-0 agitated_poitras[279383]: 167 167
Dec 05 10:27:18 compute-0 systemd[1]: libpod-8d4389a8949c369826968e7c52e8eabe67406798977f4c5856bd72187514aafc.scope: Deactivated successfully.
Dec 05 10:27:18 compute-0 podman[279352]: 2025-12-05 10:27:18.377342216 +0000 UTC m=+0.177995323 container died 8d4389a8949c369826968e7c52e8eabe67406798977f4c5856bd72187514aafc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poitras, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-cca8f7f38329b0b46cbc179d6010f8f6fcdf53b207f3a4fb3faaea485ebcbc66-merged.mount: Deactivated successfully.
Dec 05 10:27:18 compute-0 podman[279352]: 2025-12-05 10:27:18.422480924 +0000 UTC m=+0.223134011 container remove 8d4389a8949c369826968e7c52e8eabe67406798977f4c5856bd72187514aafc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poitras, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:27:18 compute-0 systemd[1]: libpod-conmon-8d4389a8949c369826968e7c52e8eabe67406798977f4c5856bd72187514aafc.scope: Deactivated successfully.
Dec 05 10:27:18 compute-0 podman[279415]: 2025-12-05 10:27:18.616173324 +0000 UTC m=+0.054829873 container create c86a76aafaea6576629a5e6d8f429e4eb7f7af1cca25c4025888750b3e8a8ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_almeida, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:27:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:18.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:18 compute-0 systemd[1]: Started libpod-conmon-c86a76aafaea6576629a5e6d8f429e4eb7f7af1cca25c4025888750b3e8a8ee7.scope.
Dec 05 10:27:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:27:18 compute-0 podman[279415]: 2025-12-05 10:27:18.59104158 +0000 UTC m=+0.029698149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:27:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4418d75c2da0bb1805df6ab1fc9e1da14f520127fce05ec7cdfffe7cfdc1ec64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4418d75c2da0bb1805df6ab1fc9e1da14f520127fce05ec7cdfffe7cfdc1ec64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4418d75c2da0bb1805df6ab1fc9e1da14f520127fce05ec7cdfffe7cfdc1ec64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4418d75c2da0bb1805df6ab1fc9e1da14f520127fce05ec7cdfffe7cfdc1ec64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:18 compute-0 podman[279415]: 2025-12-05 10:27:18.715694391 +0000 UTC m=+0.154350940 container init c86a76aafaea6576629a5e6d8f429e4eb7f7af1cca25c4025888750b3e8a8ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 10:27:18 compute-0 podman[279415]: 2025-12-05 10:27:18.72299991 +0000 UTC m=+0.161656459 container start c86a76aafaea6576629a5e6d8f429e4eb7f7af1cca25c4025888750b3e8a8ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 05 10:27:18 compute-0 podman[279415]: 2025-12-05 10:27:18.72669236 +0000 UTC m=+0.165348909 container attach c86a76aafaea6576629a5e6d8f429e4eb7f7af1cca25c4025888750b3e8a8ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_almeida, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Dec 05 10:27:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:18.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:27:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:18.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:27:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:18.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:19 compute-0 ceph-mon[74418]: from='client.16122 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:19 compute-0 ceph-mon[74418]: pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:27:19 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1137275178' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 10:27:19 compute-0 lvm[279533]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:27:19 compute-0 lvm[279533]: VG ceph_vg0 finished
Dec 05 10:27:19 compute-0 gracious_almeida[279431]: {}
Dec 05 10:27:19 compute-0 systemd[1]: libpod-c86a76aafaea6576629a5e6d8f429e4eb7f7af1cca25c4025888750b3e8a8ee7.scope: Deactivated successfully.
Dec 05 10:27:19 compute-0 systemd[1]: libpod-c86a76aafaea6576629a5e6d8f429e4eb7f7af1cca25c4025888750b3e8a8ee7.scope: Consumed 1.289s CPU time.
Dec 05 10:27:19 compute-0 podman[279415]: 2025-12-05 10:27:19.522975312 +0000 UTC m=+0.961631871 container died c86a76aafaea6576629a5e6d8f429e4eb7f7af1cca25c4025888750b3e8a8ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 05 10:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4418d75c2da0bb1805df6ab1fc9e1da14f520127fce05ec7cdfffe7cfdc1ec64-merged.mount: Deactivated successfully.
Dec 05 10:27:19 compute-0 podman[279415]: 2025-12-05 10:27:19.580740894 +0000 UTC m=+1.019397463 container remove c86a76aafaea6576629a5e6d8f429e4eb7f7af1cca25c4025888750b3e8a8ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_almeida, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 10:27:19 compute-0 systemd[1]: libpod-conmon-c86a76aafaea6576629a5e6d8f429e4eb7f7af1cca25c4025888750b3e8a8ee7.scope: Deactivated successfully.
Dec 05 10:27:19 compute-0 sudo[279265]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:27:19 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:27:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:27:19 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:27:19 compute-0 sudo[279557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:27:19 compute-0 sudo[279557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:27:19 compute-0 sudo[279557]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:27:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:20.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:27:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:27:20.586 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:27:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:27:20.587 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:27:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:27:20.587 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:27:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:27:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:20.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:27:20 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:27:20 compute-0 ceph-mon[74418]: pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:20 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:27:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:27:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:22.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:22 compute-0 nova_compute[257087]: 2025-12-05 10:27:22.514 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:22.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:27:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:27:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:27:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:27:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:23.775Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:24.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:24.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:24 compute-0 ceph-mon[74418]: pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:27:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:27:25] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Dec 05 10:27:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:27:25] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Dec 05 10:27:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:25 compute-0 ovs-vsctl[279653]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 05 10:27:26 compute-0 ceph-mon[74418]: pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:26.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:26 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26335 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 05 10:27:26 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:27:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:26.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:26 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26353 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:26 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.25949 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:27 compute-0 virtqemud[256610]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 05 10:27:27 compute-0 virtqemud[256610]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 05 10:27:27 compute-0 virtqemud[256610]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26365 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.25961 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:27 compute-0 ceph-mon[74418]: pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:27 compute-0 ceph-mon[74418]: from='client.26335 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:27 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1286524563' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:27:27 compute-0 ceph-mon[74418]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:27:27 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1471998509' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:27:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 05 10:27:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:27:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:27.487Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:27 compute-0 nova_compute[257087]: 2025-12-05 10:27:27.513 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:27 compute-0 nova_compute[257087]: 2025-12-05 10:27:27.516 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26383 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:27:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.25973 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:27:27
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'vms', 'default.rgw.control', '.nfs', 'images']
Dec 05 10:27:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:27:27 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: cache status {prefix=cache status} (starting...)
Dec 05 10:27:27 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:27:27 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: client ls {prefix=client ls} (starting...)
Dec 05 10:27:27 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:27:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:27:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:27:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:27:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.25991 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:27:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:28.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26410 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: damage ls {prefix=damage ls} (starting...)
Dec 05 10:27:28 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:27:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:28.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26425 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.26353 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.25949 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.26365 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.25961 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/973640314' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2362471132' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.26383 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.25973 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2411411600' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2333217659' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3931332058' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.25991 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2872807427' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 05 10:27:28 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: dump loads {prefix=dump loads} (starting...)
Dec 05 10:27:28 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:27:28 compute-0 lvm[280085]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:27:28 compute-0 lvm[280085]: VG ceph_vg0 finished
Dec 05 10:27:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:28.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:28 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 05 10:27:28 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:27:28 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16176 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 05 10:27:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:27:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 05 10:27:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 05 10:27:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:27:29 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26030 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 05 10:27:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:27:29 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26036 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Dec 05 10:27:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 05 10:27:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:27:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 05 10:27:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/327694266' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26054 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.26410 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2912577323' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/811159749' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.26425 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1664831714' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.16176 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4059841852' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1351056103' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2607613530' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.26030 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1202270711' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2749481844' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/922707894' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/327694266' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:27:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 05 10:27:29 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:27:29 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16206 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26485 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:30 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T10:27:30.066+0000 7f687e376640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 10:27:30 compute-0 ceph-mgr[74711]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 10:27:30 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: ops {prefix=ops} (starting...)
Dec 05 10:27:30 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:27:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:27:30 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/696330229' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 05 10:27:30 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1574455180' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:27:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:27:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:30.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:27:30 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16233 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:30.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec 05 10:27:30 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954109735' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.26036 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.26054 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3322883372' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.16206 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.26485 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1012814225' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/696330229' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1574455180' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3895191634' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2257899342' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/41713479' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/954109735' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2515696517' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1140222672' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:27:30 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 05 10:27:30 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776966295' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: session ls {prefix=session ls} (starting...)
Dec 05 10:27:31 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:27:31 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26117 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:31 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T10:27:31.127+0000 7f687e376640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 10:27:31 compute-0 ceph-mgr[74711]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 10:27:31 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: status {prefix=status} (starting...)
Dec 05 10:27:31 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26539 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 05 10:27:31 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3920043510' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 05 10:27:31 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2061308847' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:31 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26557 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Dec 05 10:27:31 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16290 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: from='client.16233 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/776966295' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2195524911' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3540364407' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: from='client.26117 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: from='client.26539 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3920043510' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2061308847' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/257801866' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1811557251' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2896858663' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 05 10:27:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 05 10:27:31 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941359237' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26572 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26147 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16305 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:32.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 05 10:27:32 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2379926620' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26590 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:32 compute-0 nova_compute[257087]: 2025-12-05 10:27:32.516 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:32 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26165 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:32.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 05 10:27:32 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2973637048' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mon[74418]: from='client.26557 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mon[74418]: pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Dec 05 10:27:32 compute-0 ceph-mon[74418]: from='client.16290 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/941359237' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mon[74418]: from='client.26572 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1464989190' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mon[74418]: from='client.26147 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3003592358' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mon[74418]: from='client.16305 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2379926620' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3304313948' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3716975967' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2973637048' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26611 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:32 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26186 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:27:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:27:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:27:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:27:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 05 10:27:33 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089782674' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26632 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26201 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 05 10:27:33 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1008365138' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 0 B/s wr, 98 op/s
Dec 05 10:27:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:33.777Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:33 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16359 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T10:27:33.786+0000 7f687e376640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 10:27:33 compute-0 ceph-mgr[74711]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 10:27:33 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26644 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: from='client.26590 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: from='client.26165 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1533353651' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3463845943' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: from='client.26611 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: from='client.26186 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1308082754' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2286825170' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4089782674' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1008365138' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3799108744' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3164580592' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26219 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 05 10:27:33 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3944397483' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:27:34 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26659 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:34 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26243 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:34.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 05 10:27:34 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241870581' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 05 10:27:34 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16386 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:34 compute-0 podman[280776]: 2025-12-05 10:27:34.504213093 +0000 UTC m=+0.153735373 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:27:34 compute-0 podman[280780]: 2025-12-05 10:27:34.509396464 +0000 UTC m=+0.148521841 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:27:34 compute-0 podman[280778]: 2025-12-05 10:27:34.525974485 +0000 UTC m=+0.165363779 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 05 10:27:34 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26668 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:34.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:34 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26261 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:34 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16404 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 05 10:27:34 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93246847' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 05 10:27:34 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26680 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: from='client.26632 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: from='client.26201 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 0 B/s wr, 98 op/s
Dec 05 10:27:35 compute-0 ceph-mon[74418]: from='client.16359 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: from='client.26644 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: from='client.26219 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3944397483' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3162337254' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: from='client.26659 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: from='client.26243 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2241870581' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3816896948' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2540592096' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26276 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16416 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940035 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:35.739742+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:36.740015+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:37.740206+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:38.740451+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 3923968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:39.740717+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 3915776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939903 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:40.740925+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 3899392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:41.741189+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 3891200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:42.741370+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 3891200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:43.741589+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 3883008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:44.741749+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 3883008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939903 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:45.741876+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 3874816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:46.742064+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 3874816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:47.742187+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 3874816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:48.742292+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 3866624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:49.742430+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 3874816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939903 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:50.742560+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 3874816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:51.742830+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 3866624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:52.743027+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 3866624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:53.743216+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 3858432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:54.743518+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 3858432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939903 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:55.743679+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 3858432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:56.743860+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 3850240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:57.744001+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 3850240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:58.744160+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 3842048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:54:59.744304+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 3842048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939903 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:00.744437+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 3842048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:01.744701+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 3833856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:02.744939+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 3833856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f23e89800 session 0x563f26398780
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f25d58800 session 0x563f23eacd20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:03.745082+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 3833856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:04.745337+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 3817472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939903 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:05.745501+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 3817472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:06.745702+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 3809280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Cumulative writes: 8370 writes, 33K keys, 8370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 8370 writes, 1809 syncs, 4.63 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8370 writes, 33K keys, 8370 commit groups, 1.0 writes per commit group, ingest: 20.82 MB, 0.03 MB/s
                                           Interval WAL: 8370 writes, 1809 syncs, 4.63 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.8 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:07.745882+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 3743744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:08.746037+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 3735552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:09.746281+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 3735552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939903 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:10.746462+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 3735552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:11.746646+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 3727360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:12.746766+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 3727360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:13.746985+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 3719168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26362c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.847229004s of 43.868213654s, submitted: 4
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:14.747187+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 3710976 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940035 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:15.747336+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 3710976 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:16.747470+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 3702784 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:17.747613+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 3702784 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:18.747715+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 3694592 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:19.747857+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 3694592 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941547 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:20.747988+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 3694592 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:21.748201+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 3686400 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:22.748315+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 3686400 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:23.748479+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 3678208 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:24.748641+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 3678208 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942468 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:25.748783+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 3678208 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:26.748922+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 3670016 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.079730988s of 13.170050621s, submitted: 4
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:27.749083+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 3670016 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:28.749291+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 3661824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:29.749424+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 3661824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:30.749598+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 3661824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:31.749815+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 3653632 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:32.750030+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 3653632 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:33.750199+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 3645440 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:34.750386+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 3637248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:35.750545+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 3637248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:36.750674+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 3629056 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:37.750821+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 3629056 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:38.750963+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 3620864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:39.751112+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 3612672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:40.751267+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 3612672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:41.751458+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 3604480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:42.751669+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 3604480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:43.751881+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 3596288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:44.752058+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 3596288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:45.752324+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 3596288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:46.752454+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 3588096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:47.752628+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 3588096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:48.752742+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 3579904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:49.752905+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 3579904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:50.753058+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 3571712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:51.753223+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 3571712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:52.753462+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 3571712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:53.753603+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 3563520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:54.753745+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 3563520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:55.753939+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 3555328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:56.754107+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 3555328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:57.754323+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 3555328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:58.754492+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 3547136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:55:59.754607+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 3538944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:00.754739+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 3530752 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:01.754963+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 3530752 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:02.755164+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 3530752 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:03.755338+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 3522560 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:04.755483+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 3522560 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:05.755689+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 3514368 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:06.755912+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 3514368 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:07.756082+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 3514368 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:08.756274+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 3506176 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:09.756390+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 3506176 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:10.756510+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 3497984 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:11.756692+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 3497984 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:12.756840+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 3489792 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:13.756964+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 3489792 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:14.757105+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 3489792 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:15.757300+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 3481600 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:16.757476+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 3481600 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:17.757665+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 3473408 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:18.757839+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 3473408 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:19.758008+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 3473408 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:20.758133+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 3465216 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:21.758363+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 3465216 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:22.758628+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 3465216 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:23.758789+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 3457024 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:24.758919+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 3457024 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:25.759046+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 3448832 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:26.759198+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 3448832 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:27.759325+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 3448832 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 05 10:27:35 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2275662628' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26707 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:28.759489+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 3440640 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:29.759615+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 3432448 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:30.759754+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 3424256 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:31.759966+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 3424256 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:32.760102+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 3416064 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:33.760269+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 3416064 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:34.760413+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 3416064 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:35.760554+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 3407872 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:36.760677+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 3407872 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:37.760865+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 3399680 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:38.761064+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 3391488 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:39.761205+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 3391488 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:40.761404+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 3383296 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:41.761692+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 3383296 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:42.761894+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 3383296 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:43.762039+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 3375104 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:44.762281+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 3375104 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:45.762458+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 3366912 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:46.762585+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 3366912 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:47.762735+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 3366912 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:48.762919+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 3358720 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:49.763078+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 3358720 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:50.763305+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 3350528 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:51.765937+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 3350528 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:52.766114+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 3342336 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:53.766302+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 86.825019836s of 86.828918457s, submitted: 1
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 3342336 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:54.766419+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 3252224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:55.766656+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 3104768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:56.766783+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 3088384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:57.767114+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 3088384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:58.767627+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:56:59.767786+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:00.767947+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f25ea2c00 session 0x563f2707c000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f23e89400 session 0x563f23ff7a40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:01.768365+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:02.768491+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:03.768632+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:04.768832+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:05.768963+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:06.769091+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:07.769381+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:08.769667+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:09.769792+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:10.769902+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942336 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:11.770104+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:12.770260+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:13.770411+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.586795807s of 19.987558365s, submitted: 210
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:14.770748+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:15.770891+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942468 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:16.771013+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:17.771284+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:18.771424+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:19.771633+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d58800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:20.771859+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943980 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:21.772122+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:22.772278+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:23.772465+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:24.772590+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.086940765s of 10.614556313s, submitted: 2
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:25.772880+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:26.773027+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:27.773682+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:28.773837+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:29.774335+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:30.775083+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:31.775448+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:32.775977+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:33.776168+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:34.776359+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:35.776481+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:36.788518+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:37.788662+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:38.788786+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:39.788996+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:40.789274+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:41.810379+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:42.810757+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:43.810981+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:44.811165+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:45.811395+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.550273895s of 21.562067032s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:46.811639+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:47.812033+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 3047424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:48.812294+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 2965504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:49.812444+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:50.812583+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:51.812786+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:52.812971+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:53.813161+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:54.813345+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:55.813511+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:56.813658+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:57.813789+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:58.813972+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:57:59.814092+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 2940928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:00.814211+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 2940928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:01.814388+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 2940928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:02.814523+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 2940928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:03.814656+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 2940928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:04.814832+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 2932736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:05.815014+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 2932736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:06.815159+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 2932736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:07.815308+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 2932736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:08.815455+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:09.815629+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:10.815800+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:11.815980+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:12.816109+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:13.816313+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:14.816574+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:15.816716+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:16.816836+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:17.816960+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:18.817182+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:19.817416+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:20.817595+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:21.817851+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:22.817963+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:23.818090+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:24.818210+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:25.818352+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:26.818484+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:27.818626+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:28.818739+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:29.818845+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:30.818967+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:31.819146+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:32.819389+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:33.819580+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:34.819744+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:35.819869+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:36.820167+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:37.820298+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:38.820471+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:39.820605+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f25d1fc00 session 0x563f249e9a40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f26362c00 session 0x563f270843c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:40.820799+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:41.821008+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:42.821162+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:43.821327+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:44.821543+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:45.821704+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942666 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:46.821883+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:47.822501+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:48.822780+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:49.822989+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:50.823363+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 2908160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.418399811s of 64.801803589s, submitted: 118
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:51.823593+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 2899968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:52.823820+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 2899968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:53.824141+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 2899968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:54.824421+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 2899968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:55.824588+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 2899968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944310 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:56.824747+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 2899968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f2631f400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,0,1,0,0,0,0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:57.825052+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 2899968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:58.825246+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 2899968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:58:59.825556+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 2899968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:00.825721+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 2891776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945822 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:01.826173+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 2891776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:02.826584+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 2891776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:03.826844+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.109941483s of 12.457413673s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 2891776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:04.827175+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 2883584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:05.827311+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 2883584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f25ea2c00 session 0x563f27085860
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f2631f400 session 0x563f2675f860
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945099 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:06.827449+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 2883584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:07.827570+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f24c9b800 session 0x563f2615bc20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f23e89c00 session 0x563f270841e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 2883584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:08.827848+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 2883584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:09.827978+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 2883584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:10.828156+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 2883584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945099 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:11.828285+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 2883584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:12.828505+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 2883584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:13.828634+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 2875392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:14.828860+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 2875392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:15.829249+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 2875392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945099 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:16.829521+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 2875392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:17.829717+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.605988503s of 14.192867279s, submitted: 2
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 2875392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:18.829861+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 2875392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:19.830023+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 2875392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:20.830155+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f25d58800 session 0x563f2701eb40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f23e89800 session 0x563f2701e3c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 2875392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945363 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:21.830287+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 2875392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:22.830410+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 2875392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:23.830571+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f2631f400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 2867200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:24.830732+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26362c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 2867200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:25.830905+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 2867200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:26.831137+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946875 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 2867200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:27.831354+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 2867200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:28.831529+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 2867200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:29.831732+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 2867200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:30.831866+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.173900604s of 13.341024399s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 2867200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:31.832045+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946743 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:32.832222+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:33.832420+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:34.833142+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:35.833539+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:36.833695+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946743 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:37.833830+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:38.834404+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:39.835001+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:40.835372+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:41.836033+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946743 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.285324097s of 10.614171982s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:42.836312+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:43.836627+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:44.836907+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:45.837074+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:46.837475+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:47.837733+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:48.837893+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:49.838056+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:50.838187+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:51.838430+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:52.838622+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:53.838767+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:54.838902+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:55.839022+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:56.839218+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:57.839427+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:58.839565+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T09:59:59.839786+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread fragmentation_score=0.000028 took=0.000206s
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:00.839938+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:01.840095+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:02.840284+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:03.840517+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:04.840720+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:05.840874+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:06.841037+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:07.841176+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:08.841325+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:09.841474+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:10.841610+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:11.841864+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 2859008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:12.842020+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:13.842190+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:14.842309+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:15.842437+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:16.857585+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:17.857741+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:18.858033+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:19.858194+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:20.858335+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:21.858525+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:22.858643+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:23.858795+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:24.858947+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:25.859115+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:26.859368+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:27.859670+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:28.859798+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:29.859963+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:30.860081+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:31.860332+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:32.860485+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:33.860675+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:34.860803+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 2850816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:35.860928+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:36.861045+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:37.861161+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:38.861397+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:39.861519+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271fe000 session 0x563f2701e1e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:40.861678+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:41.861875+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:42.862037+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:43.862195+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:44.862293+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:45.862440+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:46.862573+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:47.862697+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:48.862815+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:49.862919+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:50.863034+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 68.868896484s of 68.886718750s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:51.863171+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:52.863281+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 2842624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:53.863395+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 2834432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:54.863517+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 2834432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:55.863652+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 2834432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:56.864001+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 2834432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:57.864158+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 2834432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:58.864292+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 2834432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:00:59.864431+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 2834432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:00.864569+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 2834432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:01.864774+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:02.864972+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:03.865159+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:04.865315+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:05.865480+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:06.865615+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.932530403s of 16.247087479s, submitted: 1
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:07.865751+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:08.865929+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:09.866404+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:10.866601+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:11.866878+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:12.867189+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:13.867453+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 2826240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:14.867776+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 2818048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:15.867962+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 2818048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:16.868171+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 2818048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:17.868390+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 2818048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:18.868683+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 2818048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:19.868902+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 2818048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:20.869310+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 2818048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:21.869594+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 2818048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:22.869841+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:23.870009+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:24.870216+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:25.870397+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:26.870531+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:27.870663+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:28.870959+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:29.871145+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:30.871280+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:31.871458+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:32.871688+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:33.875290+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:34.875496+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:35.875639+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:36.875777+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:37.875922+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:38.876102+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:39.876282+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:40.876449+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:41.876626+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:42.876807+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:43.877573+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:44.877734+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:45.877981+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:46.878207+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 2809856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:47.878378+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:48.878555+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:49.878678+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:50.878849+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:51.879055+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:52.879248+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:53.879412+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:54.879541+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:55.879701+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:56.879882+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:57.880075+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:58.880208+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:01:59.880364+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:00.880500+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:01.880713+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:02.884304+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 2801664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:03.884421+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:04.884574+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:05.884824+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:06.884985+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:07.885142+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:08.885314+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:09.885473+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:10.885660+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:11.885866+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:12.885991+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:13.886138+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:14.886327+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:15.886565+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:16.887290+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:17.887887+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:18.888546+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:19.888669+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:20.889046+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:21.889203+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:22.889332+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:23.889482+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:24.889694+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271fe400 session 0x563f25db21e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:25.889838+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:26.889970+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:27.890310+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:28.890443+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:29.890570+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:30.890719+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:31.890990+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:32.891189+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:33.891294+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:34.893179+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 2793472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:35.893321+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 89.247039795s of 89.251152039s, submitted: 1
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:36.893453+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:37.893628+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:38.893768+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:39.893995+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:40.894157+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:41.894461+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948585 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:42.894611+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:43.894777+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f2631f400 session 0x563f271ebe00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f25d1fc00 session 0x563f24d201e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:44.894901+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:45.895029+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f242d4800 session 0x563f25d64960
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d58800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:46.895310+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948585 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:47.895496+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 2777088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:48.896039+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 2777088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:49.896828+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 2777088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.475939751s of 14.498519897s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:50.896966+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 2777088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:51.897142+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 2777088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948453 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:52.897350+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 2777088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:53.897516+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 2777088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:54.897670+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:55.897797+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:56.897942+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948585 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:57.898142+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:58.898342+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:02:59.898484+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:00.898629+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 2785280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ffc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:01.898895+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950097 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:02.899045+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:03.899267+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:04.899435+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:05.899570+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.121311188s of 15.132826805s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:06.899778+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:07.899907+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949506 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:08.900044+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:09.900180+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:10.900331+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:11.900530+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:12.900712+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948783 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:13.900870+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:14.901022+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:15.901188+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:16.901399+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:17.901525+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948783 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:18.901676+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:19.901812+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:20.901957+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:21.902143+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:22.902343+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948783 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:23.902466+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:24.902682+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:25.902820+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:26.902937+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:27.903061+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948783 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:28.903179+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 2768896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:29.903977+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:30.904164+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:31.904449+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:32.904589+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948783 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:33.904788+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f26362c00 session 0x563f270843c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f25ea2c00 session 0x563f23dd92c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:34.904962+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:35.905149+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:36.905317+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:37.905432+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948783 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:38.905591+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:39.905747+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:40.905915+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:41.906088+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:42.906265+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948783 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:43.906398+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.731658936s of 38.722965240s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:44.906526+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:45.906647+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:46.906830+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 2760704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:47.906967+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948915 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 2752512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:48.907108+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f23e89c00 session 0x563f272b23c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f23e89800 session 0x563f271d83c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 2752512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:49.907263+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 2752512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:50.907380+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 2752512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:51.907546+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 2752512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:52.907755+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948915 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 2752512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:53.907952+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 2752512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:54.908148+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 2752512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:55.908287+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 2752512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:56.908505+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 2752512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:57.908625+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948915 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 2752512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:58.908828+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.309248924s of 14.313110352s, submitted: 1
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:03:59.908971+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 2752512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:00.909132+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 2744320 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:01.909328+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 2744320 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:02.909619+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 2744320 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950427 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:03.910147+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:04.910393+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:05.910527+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:06.910658+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:07.910785+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950427 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:08.910899+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:09.911045+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:10.911270+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:11.911431+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:12.911616+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950427 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:13.911768+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:14.911928+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:15.912054+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.317342758s of 17.328180313s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:16.912206+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:17.912302+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950295 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:18.912423+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:19.912565+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:20.912718+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:21.912905+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:22.913063+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950295 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:23.913177+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:24.913312+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:25.913511+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:26.913657+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:27.913829+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950295 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:28.914010+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:29.914159+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:30.914297+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:31.914457+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:32.914582+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950295 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:33.914701+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:34.914841+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:35.915541+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:36.915707+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:37.915916+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950295 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:38.916059+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:39.916175+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:40.916320+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:41.916497+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:42.916632+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950295 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:43.916797+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:44.916906+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:45.917035+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:46.917179+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:47.917298+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950295 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:48.917443+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:49.917606+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:50.917855+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:51.918088+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:52.918252+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950295 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:53.918416+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:54.918635+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f23e89c00 session 0x563f24c3fa40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:55.918770+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:56.918956+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:57.919096+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271ffc00 session 0x563f271fc3c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f242d4800 session 0x563f26398d20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 1671168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950295 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: mgrc ms_handle_reset ms_handle_reset con 0x563f25ea3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/17115915
Dec 05 10:27:35 compute-0 ceph-osd[82677]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/17115915,v1:192.168.122.100:6801/17115915]
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: get_auth_request con 0x563f271fe400 auth_method 0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: mgrc handle_mgr_configure stats_period=5
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:58.919222+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 1548288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:04:59.919387+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 1548288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:00.919515+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 1548288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:01.919680+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 1548288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:02.919808+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 1548288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950295 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:03.919955+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:04.920104+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:05.920313+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.200775146s of 50.205757141s, submitted: 1
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:06.920467+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Cumulative writes: 9103 writes, 34K keys, 9103 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9103 writes, 2173 syncs, 4.19 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 733 writes, 1103 keys, 733 commit groups, 1.0 writes per commit group, ingest: 0.36 MB, 0.00 MB/s
                                           Interval WAL: 733 writes, 364 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.84              0.00         1    0.840       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.8 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d29b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f225d3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:07.920593+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950427 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:08.920738+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:09.920855+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:10.920971+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:11.921121+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:12.921306+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952071 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:13.925316+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:14.925465+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:15.925622+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:16.925794+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.248160362s of 11.260050774s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:17.925938+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951480 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:18.926111+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:19.926322+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:20.926458+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:21.926630+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:22.926767+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 1531904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951348 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:23.926927+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:24.927159+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:25.927350+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:26.927503+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:27.927635+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951216 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:28.927767+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:29.927892+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:30.928040+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:31.928269+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:32.928410+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951216 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:33.928554+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:34.928713+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f23e89c00 session 0x563f24c3d2c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:35.928871+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:36.929278+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:37.929406+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951216 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:38.929539+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:39.929696+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:40.929837+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:41.930042+0000)
Dec 05 10:27:35 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26303 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:42.930224+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 1523712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951216 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:43.930394+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 1515520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:44.930532+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 1515520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:45.930682+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 1515520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.737363815s of 28.748731613s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:46.930819+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 1515520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:47.931008+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 1507328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951348 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:48.931179+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 1507328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:49.931321+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 1507328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:50.931458+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 1507328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:51.931629+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 1507328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:52.931769+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 1507328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952860 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fec00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:53.931922+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:54.932048+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:55.937667+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.229444504s of 10.239780426s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:56.937881+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:57.938044+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953781 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:58.938194+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:59.938356+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:00.938499+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:01.938681+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:02.938824+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:03.938970+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:04.939101+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:05.939263+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:06.939420+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:07.939527+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:08.939651+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:09.939797+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:10.939959+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:11.940212+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:12.940381+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:13.940512+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:14.940699+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:15.940848+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:16.940970+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:17.941070+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:18.941209+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:19.941351+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:20.941530+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:21.941765+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:22.941909+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:23.942096+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:24.942416+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:25.942632+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:26.942825+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:27.942997+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:28.943218+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:29.943451+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:30.943632+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:31.943876+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:32.944119+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:33.944324+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:34.944445+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:35.944611+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:36.944864+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:37.945028+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:38.945263+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:39.945639+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:40.945773+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:41.945981+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:42.946169+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:43.946332+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:44.946446+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:45.946542+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:46.946675+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:47.946814+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:48.946943+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:49.947066+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:50.947337+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:51.947517+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:52.947644+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:53.947779+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:54.947888+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.823249817s of 58.867816925s, submitted: 2
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:55.948110+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:57.286323+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:58.286459+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953721 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:59.286590+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 2449408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:00.286704+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:01.286831+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:02.287011+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:03.287132+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:04.287749+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:05.287867+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:06.287968+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:07.288110+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:08.288242+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:09.288347+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:10.290556+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:11.290693+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:12.290845+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:13.290958+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:14.291102+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:15.291284+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:16.291418+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:17.291529+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:18.291734+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271fe800 session 0x563f26e7eb40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f242d4800 session 0x563f272b32c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:19.291852+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:20.291994+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:21.292178+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:22.292375+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:23.292687+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:24.292839+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:25.293070+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:26.293276+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:27.293408+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:28.293592+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:29.293707+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.996955872s of 34.081897736s, submitted: 213
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:30.293869+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:31.294015+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:32.294192+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:33.294343+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953781 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:34.294504+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:35.294673+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:36.294855+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:37.295040+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:38.295181+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953190 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:39.361125+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:40.361335+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:41.361472+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.063858986s of 12.074111938s, submitted: 2
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:42.361780+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:43.363016+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:44.363184+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:45.363676+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:46.364365+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:47.364561+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:48.364675+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:49.364895+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:50.365026+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:51.365328+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:52.365567+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:53.365879+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:54.366117+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:55.366350+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:56.366498+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:57.366619+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:58.366752+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:59.366881+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:00.366998+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:01.367118+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:02.367332+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:03.367465+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271fec00 session 0x563f26e7f2c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271fe000 session 0x563f24066b40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:04.367883+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:05.368136+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:06.368433+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:07.368589+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:08.368724+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:09.369907+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:10.370089+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 2359296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:11.371399+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 2359296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:12.371557+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 2359296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:13.371732+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 2359296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.860660553s of 32.466499329s, submitted: 120
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:14.371890+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:15.372137+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:16.372372+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:17.372598+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:18.372753+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954111 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:19.372948+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:20.373133+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:21.373378+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:22.373608+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271ff400 session 0x563f2707c3c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271ff000 session 0x563f2639e3c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:23.373754+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954111 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:24.373963+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:25.374145+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 2334720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:26.374370+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 2334720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:27.374660+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 2334720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:28.374856+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 2334720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954111 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:29.375116+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 2334720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:30.375305+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 2334720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:31.375489+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.264575958s of 17.273166656s, submitted: 2
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:32.375702+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:33.375873+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954111 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:34.376028+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:35.376169+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:36.376305+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:37.376520+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:38.376647+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955623 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:39.376835+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:40.376952+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:41.377093+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:42.377313+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.155375481s of 11.168437958s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:43.377452+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:44.377602+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955032 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:45.377780+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:46.377851+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:47.377982+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:48.378107+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:49.378325+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955032 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:50.378473+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:51.378624+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:52.378824+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:53.378959+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:54.379092+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:55.379265+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:56.379419+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:57.379558+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:58.379712+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:59.379845+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:00.380008+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:01.380172+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:02.380396+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:03.380563+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:04.380788+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:05.380941+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:06.381102+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:07.381296+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:08.381432+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:09.381585+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:10.381712+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:11.381857+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:12.382067+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:13.382394+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:14.382510+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:15.382681+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f23e89c00 session 0x563f23eade00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:16.382950+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:17.383142+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:18.383325+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:19.384308+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:20.384509+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:21.384887+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:22.385139+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:23.385341+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:24.385578+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:25.385725+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.550487518s of 43.558589935s, submitted: 2
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:26.385894+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:27.386067+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:28.386308+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:29.386579+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956544 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:30.386754+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:31.387001+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:32.387184+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:33.387354+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:34.387546+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956544 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:35.387695+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:36.387866+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:37.388017+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:38.388184+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.983158112s of 12.994561195s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:39.388364+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:40.388531+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:41.388722+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:42.388978+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:43.389170+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:44.389386+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:45.389535+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:46.389644+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:47.389764+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:48.389906+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:49.390049+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:50.390222+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:51.390392+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:52.390558+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:53.390756+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:54.390943+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:55.391088+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:56.391293+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:57.391431+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:58.391550+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:59.391701+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:00.391852+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:01.392032+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:02.392286+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:03.392442+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:04.392565+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:05.392687+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:06.393007+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:07.393177+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:08.393302+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:09.393501+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:10.393678+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:11.393846+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:12.394083+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:13.394279+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:14.394447+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:15.394619+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:16.394797+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:17.394957+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:18.395112+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:19.395269+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:20.395416+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:21.395553+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:22.395756+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:23.395932+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:24.396150+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:25.396285+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:26.396442+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:27.396637+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:28.396835+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:29.397084+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:30.397313+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:31.397603+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:32.397866+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271fe800 session 0x563f26e7ef00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:33.398037+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:34.398280+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:35.398526+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:36.398661+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:37.398846+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:38.398971+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:39.399147+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:40.399329+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:41.399556+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:42.399786+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fec00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.717926025s of 63.721668243s, submitted: 1
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:43.399925+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:44.400134+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955953 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:45.400401+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:46.400570+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:47.400729+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:48.400883+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:49.401035+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958977 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:50.401224+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:51.401565+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:52.401773+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:53.401991+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:54.402209+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958386 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:55.402407+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.878764153s of 13.278193474s, submitted: 4
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:56.402546+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:57.402759+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:58.402942+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:59.403134+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:00.403367+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:01.403618+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:02.403813+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:03.404457+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:04.404630+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:05.404788+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:06.404934+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:07.405054+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:08.405175+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:09.405343+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:10.409726+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:11.409885+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:12.410102+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:13.410286+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:14.410459+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:15.410602+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:16.410737+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:17.410891+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:18.411077+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:19.411270+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:20.411431+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:21.411617+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:22.411858+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:23.412022+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:24.412181+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:25.412341+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:26.412470+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:27.412613+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:28.412736+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:29.412857+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:30.413007+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:31.413145+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:32.413324+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:33.413460+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:34.413643+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:35.437845+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:36.438010+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:37.438201+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:38.438430+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:39.438611+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:40.438754+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:41.438877+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:42.439040+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:43.439183+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:44.439318+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:45.439805+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:46.440011+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:47.440169+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:48.440324+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:49.440455+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ffc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:50.440577+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 2277376 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 54.217182159s of 54.254943848s, submitted: 1
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:51.440730+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 2277376 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 132 ms_handle_reset con 0x563f271fec00 session 0x563f272b3680
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0xe31b5/0x193000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:52.441010+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 18898944 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 134 ms_handle_reset con 0x563f271ffc00 session 0x563f23e065a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:53.441193+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:54.443691+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 86155264 unmapped: 18669568 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 135 ms_handle_reset con 0x563f23e89c00 session 0x563f26fd74a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080728 data_alloc: 218103808 data_used: 155648
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:55.450537+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 18661376 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:56.450782+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 18661376 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fb66f000/0x0/0x4ffc00000, data 0x10e73f8/0x119b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:57.451099+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 18661376 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:58.451338+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:59.451461+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082722 data_alloc: 218103808 data_used: 155648
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:00.451621+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66d000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:01.451768+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.318701744s of 11.500458717s, submitted: 46
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:02.451987+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:03.452355+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:04.452656+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66d000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082854 data_alloc: 218103808 data_used: 155648
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:05.452870+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:06.453018+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:07.453219+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:08.453441+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:09.453616+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083526 data_alloc: 218103808 data_used: 155648
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:10.453785+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:11.453937+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:12.454229+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:13.454460+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:14.454679+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082344 data_alloc: 218103808 data_used: 155648
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:15.454855+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:16.455007+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:17.455173+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.933827400s of 15.960062981s, submitted: 4
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:18.455357+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 18857984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:19.455522+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 18857984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082212 data_alloc: 218103808 data_used: 155648
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:20.455679+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:21.455834+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:22.456017+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:23.456280+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:24.456506+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082212 data_alloc: 218103808 data_used: 155648
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:25.456661+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:26.456791+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 ms_handle_reset con 0x563f271ff000 session 0x563f271ead20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 ms_handle_reset con 0x563f271fe800 session 0x563f272b2b40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:27.456945+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:28.457169+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:29.457353+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082212 data_alloc: 218103808 data_used: 155648
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:30.457591+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:31.457858+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:32.458074+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:33.458249+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:34.458481+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:35.458636+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082212 data_alloc: 218103808 data_used: 155648
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:36.458957+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:37.459582+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 ms_handle_reset con 0x563f271ff400 session 0x563f24c3eb40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.366529465s of 20.370347977s, submitted: 1
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:38.459749+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 11108352 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 ms_handle_reset con 0x563f23e89c00 session 0x563f263985a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:39.459916+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 11108352 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:40.460102+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101952 data_alloc: 218103808 data_used: 6975488
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 11108352 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:41.460255+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93724672 unmapped: 11100160 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 138 ms_handle_reset con 0x563f271ff000 session 0x563f271d8780
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:42.460463+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93724672 unmapped: 14254080 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:43.460717+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 14237696 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fafcc000/0x0/0x4ffc00000, data 0x17875f6/0x183e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:44.460886+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 14237696 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:45.461105+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157320 data_alloc: 218103808 data_used: 6975488
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 14237696 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:46.461320+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 14237696 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fafcc000/0x0/0x4ffc00000, data 0x17875f6/0x183e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:47.461483+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 14237696 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ffc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 138 ms_handle_reset con 0x563f271ffc00 session 0x563f26e18780
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _renew_subs
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.707313538s of 10.020560265s, submitted: 16
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:48.461610+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 94076928 unmapped: 13901824 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f2631f400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:49.461747+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 8994816 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:50.461902+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208862 data_alloc: 234881024 data_used: 13901824
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100474880 unmapped: 7503872 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:51.462061+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100483072 unmapped: 7495680 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:52.462355+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fafa6000/0x0/0x4ffc00000, data 0x17ad5c8/0x1865000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7487488 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:53.462541+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7487488 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:54.462688+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7487488 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:55.462860+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207299 data_alloc: 234881024 data_used: 13901824
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7487488 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:56.463007+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7487488 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:57.463147+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7479296 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fafa7000/0x0/0x4ffc00000, data 0x17ad5c8/0x1865000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:58.463311+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7479296 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:59.463482+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fafa7000/0x0/0x4ffc00000, data 0x17ad5c8/0x1865000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f271fc000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f2701eb40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100507648 unmapped: 7471104 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:00.463707+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207299 data_alloc: 234881024 data_used: 13901824
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100507648 unmapped: 7471104 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.999906540s of 13.015459061s, submitted: 11
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:01.463994+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105619456 unmapped: 4464640 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:02.464366+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0x21085c8/0x21c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 4890624 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:03.464598+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 4808704 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:04.464876+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 4808704 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:05.465165+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295361 data_alloc: 234881024 data_used: 14913536
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 4808704 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:06.465428+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 4808704 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa641000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:07.465690+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 4808704 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:08.465909+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105283584 unmapped: 4800512 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:09.466086+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105283584 unmapped: 4800512 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:10.466279+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296273 data_alloc: 234881024 data_used: 14983168
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 4759552 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:11.466437+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 4759552 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:12.466632+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 4759552 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa641000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:13.466837+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 4759552 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:14.467044+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:15.467221+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296405 data_alloc: 234881024 data_used: 14983168
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa641000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:16.467489+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:17.467648+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:18.467841+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:19.468060+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.198905945s of 18.406009674s, submitted: 58
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa641000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:20.468251+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294974 data_alloc: 234881024 data_used: 14983168
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:21.468443+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa641000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:22.468674+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e89c00 session 0x563f2639e000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f26b33c20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:23.468816+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f25a3dc20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ffc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ffc00 session 0x563f26d574a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26362c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26362c00 session 0x563f2709e3c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:24.468937+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105914368 unmapped: 4169728 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e89c00 session 0x563f23e04d20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa640000/0x0/0x4ffc00000, data 0x21135d8/0x21cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,1,2,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f271d85a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26362c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:25.469080+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26362c00 session 0x563f25a3d680
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f26d46b40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ffc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ffc00 session 0x563f26d46d20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e89c00 session 0x563f26d46f00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367848 data_alloc: 234881024 data_used: 15507456
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 19677184 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:26.469403+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 19677184 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:27.469601+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 19677184 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:28.469942+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9c6a000/0x0/0x4ffc00000, data 0x2ae95d8/0x2ba2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f26d472c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 19644416 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:29.470133+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 19644416 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:30.470322+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26362c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26362c00 session 0x563f26d47680
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367848 data_alloc: 234881024 data_used: 15507456
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 19644416 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f26d47860
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ffc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:31.470503+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.197183609s of 11.661909103s, submitted: 10
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ffc00 session 0x563f26d47a40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 20185088 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:32.470713+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9c69000/0x0/0x4ffc00000, data 0x2ae95fb/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105676800 unmapped: 20152320 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:33.470971+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105709568 unmapped: 20119552 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:34.471145+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115228672 unmapped: 10600448 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:35.471293+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439221 data_alloc: 234881024 data_used: 25825280
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9c69000/0x0/0x4ffc00000, data 0x2ae95fb/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 10567680 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:36.471458+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 10567680 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:37.472300+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 10567680 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:38.472423+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 10567680 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9c69000/0x0/0x4ffc00000, data 0x2ae95fb/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:39.473038+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 10534912 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:40.473513+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439221 data_alloc: 234881024 data_used: 25825280
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 10534912 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:41.473691+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 10534912 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:42.473871+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9c69000/0x0/0x4ffc00000, data 0x2ae95fb/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 10502144 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:43.474069+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9c69000/0x0/0x4ffc00000, data 0x2ae95fb/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 10502144 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:44.474347+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.484658241s of 13.498271942s, submitted: 4
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 7168000 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:45.474606+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466331 data_alloc: 234881024 data_used: 25829376
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 7168000 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:46.474949+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 7168000 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff400 session 0x563f271fc3c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe800 session 0x563f271ebc20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:47.475120+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 6815744 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:48.475291+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 7397376 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:49.475407+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8712000/0x0/0x4ffc00000, data 0x2ea05fb/0x2f5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 7397376 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:50.475520+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1478667 data_alloc: 234881024 data_used: 26726400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 7397376 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:51.475787+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8712000/0x0/0x4ffc00000, data 0x2ea05fb/0x2f5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7364608 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:52.476027+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7364608 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:53.476193+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7364608 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:54.476346+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8712000/0x0/0x4ffc00000, data 0x2ea05fb/0x2f5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 7331840 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:55.476513+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1478667 data_alloc: 234881024 data_used: 26726400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 7331840 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:56.476683+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8712000/0x0/0x4ffc00000, data 0x2ea05fb/0x2f5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 7331840 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:57.477030+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e89c00 session 0x563f25a3d0e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 7331840 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.281052589s of 13.395196915s, submitted: 38
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:58.477186+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 15196160 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:59.477375+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f272081e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:00.477642+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f94a0000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303085 data_alloc: 234881024 data_used: 15507456
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:01.477930+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:02.478219+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:03.478533+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:04.478696+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:05.478802+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303085 data_alloc: 234881024 data_used: 15507456
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:06.479010+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f94a0000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:07.479192+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f2631f400 session 0x563f2707cb40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25ea2c00 session 0x563f26e18b40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.288631439s of 10.002907753s, submitted: 27
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f94a0000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:08.479399+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e89c00 session 0x563f271fc5a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 20455424 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:09.479661+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 20455424 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:10.479821+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127863 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 20455424 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:11.479998+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 20455424 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:12.480205+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 20455424 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:13.480400+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:14.480536+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:15.480802+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127731 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:16.481016+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:17.481177+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:18.481347+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:19.481557+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:20.481711+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127731 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:21.481849+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:22.482046+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:23.482167+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:24.482327+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:25.482495+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127731 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:26.482653+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:27.482743+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:28.482888+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:29.482971+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:30.483085+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127731 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:31.483268+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:32.483477+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:33.483594+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:34.483768+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:35.483923+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127731 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:36.484040+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:37.484171+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:38.484293+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:39.484453+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:40.484568+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127731 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:41.484672+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:42.484877+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:43.485041+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.049350739s of 36.060447693s, submitted: 3
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:44.485138+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f26774780
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104914944 unmapped: 26173440 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:45.485319+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181881 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104914944 unmapped: 26173440 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:46.485461+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104914944 unmapped: 26173440 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:47.485625+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104914944 unmapped: 26173440 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:48.485826+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d43000/0x0/0x4ffc00000, data 0x18715c8/0x1929000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104914944 unmapped: 26173440 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:49.486004+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe800 session 0x563f26d463c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f2639e3c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 26107904 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:50.486138+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183518 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 26107904 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:51.486307+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:52.486475+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:53.486598+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d42000/0x0/0x4ffc00000, data 0x18715eb/0x192a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:54.486793+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:55.486941+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236870 data_alloc: 234881024 data_used: 15372288
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:56.487110+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:57.487382+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:58.487583+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d42000/0x0/0x4ffc00000, data 0x18715eb/0x192a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:59.487747+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d42000/0x0/0x4ffc00000, data 0x18715eb/0x192a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:00.487888+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236870 data_alloc: 234881024 data_used: 15372288
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:01.488058+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.800022125s of 17.457147598s, submitted: 10
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:02.488319+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d42000/0x0/0x4ffc00000, data 0x18715eb/0x192a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:03.488457+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d42000/0x0/0x4ffc00000, data 0x18715eb/0x192a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:04.488628+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 20545536 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:05.488809+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 20545536 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287764 data_alloc: 234881024 data_used: 15491072
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:06.488936+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 21200896 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.9 total, 600.0 interval
                                           Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2676 syncs, 3.83 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1133 writes, 3083 keys, 1133 commit groups, 1.0 writes per commit group, ingest: 2.58 MB, 0.00 MB/s
                                           Interval WAL: 1133 writes, 503 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:07.489081+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 21200896 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:08.489255+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 21200896 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1ebc5eb/0x1f75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:09.489995+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1ebc5eb/0x1f75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:10.490546+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288676 data_alloc: 234881024 data_used: 15749120
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:11.490949+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:12.491321+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:13.491581+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:14.491841+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:15.492079+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.498682976s of 13.849031448s, submitted: 41
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1ebc5eb/0x1f75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288544 data_alloc: 234881024 data_used: 15749120
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:16.492296+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:17.492825+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:18.493143+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1ebc5eb/0x1f75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:19.493586+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:20.493859+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288544 data_alloc: 234881024 data_used: 15749120
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:21.494316+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:22.494758+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1ebc5eb/0x1f75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:23.495151+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:24.495363+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:25.495658+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288544 data_alloc: 234881024 data_used: 15749120
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:26.495873+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1ebc5eb/0x1f75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:27.496122+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f23dd9680
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.286449432s of 12.293078423s, submitted: 1
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f25d65860
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:28.496391+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:29.496582+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:30.496744+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136804 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:31.496925+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:32.497208+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:33.497480+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:34.497670+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:35.497815+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136804 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:36.498006+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:37.498282+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:38.498481+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:39.498691+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:40.498854+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136804 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:41.498992+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:42.499144+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:43.499296+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:44.499601+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:45.499767+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136804 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:46.500012+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:47.500502+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:48.500780+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:49.501205+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:50.501396+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136804 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:51.501715+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:52.502024+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:53.502286+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:54.502475+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:55.502755+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.865468979s of 27.915294647s, submitted: 19
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25ea2c00 session 0x563f2615af00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe800 session 0x563f25db4960
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff400 session 0x563f24c3ef00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f25db2b40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25ea2c00 session 0x563f23e041e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172653 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:56.503044+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa01d000/0x0/0x4ffc00000, data 0x15975c8/0x164f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:57.503302+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:58.503516+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa01d000/0x0/0x4ffc00000, data 0x15975c8/0x164f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:59.503737+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:00.503913+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172653 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:01.504168+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa01d000/0x0/0x4ffc00000, data 0x15975c8/0x164f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:02.504413+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe800 session 0x563f25db5a40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:03.504566+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f267750e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:04.504714+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26362c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26362c00 session 0x563f24c305a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f24c50b40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104611840 unmapped: 26476544 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa01d000/0x0/0x4ffc00000, data 0x15975c8/0x164f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:05.504863+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174718 data_alloc: 218103808 data_used: 7532544
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:06.505076+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 26140672 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:07.505262+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:08.505503+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:09.505673+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa01d000/0x0/0x4ffc00000, data 0x15975c8/0x164f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:10.505843+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206638 data_alloc: 234881024 data_used: 12292096
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:11.506072+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:12.506321+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:13.506492+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa01d000/0x0/0x4ffc00000, data 0x15975c8/0x164f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:14.506627+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:15.506844+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:16.507041+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206638 data_alloc: 234881024 data_used: 12292096
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.268453598s of 21.364921570s, submitted: 22
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:17.507215+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 19488768 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:18.507442+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21020672 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:19.507606+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x1c265c8/0x1cde000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 20979712 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:20.507851+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 20930560 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:21.508060+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274556 data_alloc: 234881024 data_used: 12996608
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 20889600 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:22.508320+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 20889600 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:23.508476+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 20889600 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f997c000/0x0/0x4ffc00000, data 0x1c385c8/0x1cf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:24.508806+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 20889600 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:25.508967+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 20889600 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:26.509114+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274876 data_alloc: 234881024 data_used: 13004800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:27.509311+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:28.509418+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f997c000/0x0/0x4ffc00000, data 0x1c385c8/0x1cf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:29.509570+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:30.509745+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:31.509912+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275788 data_alloc: 234881024 data_used: 13074432
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:32.510129+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:33.510413+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f997c000/0x0/0x4ffc00000, data 0x1c385c8/0x1cf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:34.510587+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:35.510734+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:36.510898+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275788 data_alloc: 234881024 data_used: 13074432
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:37.511075+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:38.511336+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f997c000/0x0/0x4ffc00000, data 0x1c385c8/0x1cf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 20815872 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:39.511553+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 20815872 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:40.511745+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 20815872 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:41.511913+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275788 data_alloc: 234881024 data_used: 13074432
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 20815872 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:42.512151+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f27085860
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f24c3d680
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 20144128 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:43.512314+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f997c000/0x0/0x4ffc00000, data 0x1c385c8/0x1cf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f24c56800 session 0x563f249e9e00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d89c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 20037632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f24c56800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.912588120s of 26.871160507s, submitted: 58
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f24c56800 session 0x563f23370d20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:44.512451+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e88000 session 0x563f23ff6d20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e88000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:45.512645+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:46.512961+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287814 data_alloc: 234881024 data_used: 13074432
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:47.513167+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d555c8/0x1e0d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:48.513360+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d555c8/0x1e0d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f24c56800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f24c56800 session 0x563f24c31a40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:49.513485+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:50.513596+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d555c8/0x1e0d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:51.513800+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f24c30960
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d555c8/0x1e0d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288118 data_alloc: 234881024 data_used: 13074432
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f985d000/0x0/0x4ffc00000, data 0x1d565c8/0x1e0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109142016 unmapped: 21946368 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:52.514034+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f24c30000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f25de3a40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109142016 unmapped: 21946368 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:53.514174+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109142016 unmapped: 21946368 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:54.514298+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.773413658s of 10.834179878s, submitted: 11
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 23470080 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:55.514414+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d565d8/0x1e0f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108871680 unmapped: 22216704 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:56.514532+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295780 data_alloc: 234881024 data_used: 13955072
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 22126592 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:57.514680+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 22118400 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:58.514819+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 22118400 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:59.514965+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 22110208 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:00.515122+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 22110208 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:01.515344+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f944d000/0x0/0x4ffc00000, data 0x1d565d8/0x1e0f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295780 data_alloc: 234881024 data_used: 13955072
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 22102016 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:02.515725+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 22102016 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:03.516010+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 22102016 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f944d000/0x0/0x4ffc00000, data 0x1d565d8/0x1e0f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:04.516194+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 22102016 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:05.516367+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 22102016 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.720484734s of 11.360827446s, submitted: 234
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:06.516517+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336510 data_alloc: 234881024 data_used: 13955072
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f944d000/0x0/0x4ffc00000, data 0x1d565d8/0x1e0f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 16007168 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:07.516657+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 16580608 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:08.516897+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 16572416 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:09.517103+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 16572416 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:10.517630+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 16572416 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8d7f000/0x0/0x4ffc00000, data 0x24245d8/0x24dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:11.517788+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354850 data_alloc: 234881024 data_used: 15097856
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8d7f000/0x0/0x4ffc00000, data 0x24245d8/0x24dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 16572416 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:12.518019+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8d7f000/0x0/0x4ffc00000, data 0x24245d8/0x24dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 16695296 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:13.518187+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 16695296 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:14.518363+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 16695296 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:15.518531+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f25de21e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b2c00 session 0x563f25de3860
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.546108246s of 10.014736176s, submitted: 83
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:16.518652+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f2639fc20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8d5e000/0x0/0x4ffc00000, data 0x24455d8/0x24fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280789 data_alloc: 234881024 data_used: 13074432
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:17.518786+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1c395c8/0x1cf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:18.518925+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:19.519113+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:20.519301+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1c395c8/0x1cf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:21.519516+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280789 data_alloc: 234881024 data_used: 13074432
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:22.519800+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25ea2c00 session 0x563f2707c960
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe800 session 0x563f23dda960
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f24c56800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 19415040 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1c395c8/0x1cf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:23.519950+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f24c56800 session 0x563f23dd9c20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:24.520162+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:25.520388+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:26.520644+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153699 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:27.520892+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:28.521136+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:29.521359+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:30.521509+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:31.521712+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153699 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 19390464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:32.521887+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 19390464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:33.522067+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 19390464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:34.522337+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:35.522518+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 19390464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:36.522702+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 19390464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153699 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:37.522856+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 19390464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:38.523114+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:39.523302+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:40.523531+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:41.523749+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153699 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:42.523981+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:43.524167+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:44.524361+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe000 session 0x563f25d62960
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f2639e960
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:45.524573+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d58800 session 0x563f249e9a40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:46.524761+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153699 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.601390839s of 30.812540054s, submitted: 44
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:47.524949+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111181824 unmapped: 19906560 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25ea3000 session 0x563f24d214a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d58800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:48.525124+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 19865600 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:49.525355+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 19824640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:50.525560+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 19824640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:51.525753+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 19824640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153699 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:52.525910+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 19816448 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b2c00 session 0x563f26b33e00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f26b325a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:53.526052+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 23322624 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:54.526285+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 23322624 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:55.526452+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 23322624 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f25db3a40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f26d46d20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:56.527006+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 23314432 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182372 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f23dd9e00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b2c00 session 0x563f26d46b40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:57.527162+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 23625728 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.542368889s of 10.452962875s, submitted: 147
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x14b962a/0x1572000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:58.527350+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 23625728 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:59.527495+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 23609344 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:00.527627+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 23609344 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:01.528017+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 23609344 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210105 data_alloc: 234881024 data_used: 11472896
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:02.528272+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 23609344 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:03.528452+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 23609344 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x14b962a/0x1572000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:04.528722+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 23601152 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:05.528900+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 23601152 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:06.529091+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 23601152 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210105 data_alloc: 234881024 data_used: 11472896
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:07.529334+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 23601152 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x14b962a/0x1572000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:08.529567+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 23601152 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:09.529797+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.902854919s of 11.906520844s, submitted: 1
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 23592960 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:10.529934+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 23027712 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:11.530512+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 23027712 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251197 data_alloc: 234881024 data_used: 11472896
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:12.530694+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 23019520 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:13.530836+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:14.531139+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:15.531333+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:16.531532+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252459 data_alloc: 234881024 data_used: 11472896
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:17.531692+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:18.531878+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:19.532074+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:20.532423+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 23855104 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:21.532608+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 23855104 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252459 data_alloc: 234881024 data_used: 11472896
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:22.534401+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 23855104 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:23.534563+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 23855104 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:24.534697+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 23855104 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:25.534842+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 23855104 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:26.535141+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252459 data_alloc: 234881024 data_used: 11472896
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:27.535390+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:28.536604+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:29.536965+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:30.537873+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:31.538004+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252459 data_alloc: 234881024 data_used: 11472896
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:32.538388+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:33.538935+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:34.539592+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:35.539779+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:36.539949+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252459 data_alloc: 234881024 data_used: 11472896
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:37.540094+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:38.540268+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:39.540432+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:40.540569+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:41.540702+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252459 data_alloc: 234881024 data_used: 11472896
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:42.540875+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:43.541058+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.127941132s of 34.422428131s, submitted: 37
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:44.541251+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 23838720 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:45.541430+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 23838720 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:46.548119+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 23838720 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252763 data_alloc: 234881024 data_used: 11472896
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9751000/0x0/0x4ffc00000, data 0x1a5162a/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:47.548312+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 23322624 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f25de21e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f271eab40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f23370d20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f24c50b40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b2c00 session 0x563f25de3a40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:48.548498+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 23306240 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:49.548662+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 23306240 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f25db21e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:50.548825+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 23306240 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e89800 session 0x563f263981e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:51.549051+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f26fd7860
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 23314432 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f2707cb40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299412 data_alloc: 234881024 data_used: 11476992
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f91fa000/0x0/0x4ffc00000, data 0x1fa864d/0x2062000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:52.549295+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 23363584 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:53.549486+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113483776 unmapped: 21282816 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:54.549668+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f91fa000/0x0/0x4ffc00000, data 0x1fa864d/0x2062000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114458624 unmapped: 20307968 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:55.549860+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114458624 unmapped: 20307968 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:56.550048+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114458624 unmapped: 20307968 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337676 data_alloc: 234881024 data_used: 16990208
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:57.550224+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 20299776 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:58.550470+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 20299776 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:59.550650+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 20299776 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:00.550786+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f91fa000/0x0/0x4ffc00000, data 0x1fa864d/0x2062000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 20299776 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:01.550923+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 20291584 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337676 data_alloc: 234881024 data_used: 16990208
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:02.551121+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 20291584 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:03.551260+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 20291584 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:04.551377+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.942338943s of 20.544075012s, submitted: 30
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f91fa000/0x0/0x4ffc00000, data 0x1fa864d/0x2062000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,2])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 15704064 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:05.551528+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 14262272 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:06.551685+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 14278656 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f896f000/0x0/0x4ffc00000, data 0x283364d/0x28ed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409154 data_alloc: 234881024 data_used: 17223680
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:07.551811+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8940000/0x0/0x4ffc00000, data 0x286264d/0x291c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120537088 unmapped: 14229504 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:08.551993+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120537088 unmapped: 14229504 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:09.552143+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8940000/0x0/0x4ffc00000, data 0x286264d/0x291c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 14221312 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:10.552282+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 14221312 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:11.552427+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 14180352 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406866 data_alloc: 234881024 data_used: 17223680
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:12.552606+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 14180352 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:13.552742+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 14180352 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:14.552950+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f893d000/0x0/0x4ffc00000, data 0x286564d/0x291f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 14180352 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:15.553095+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 14172160 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:16.553296+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 14172160 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406866 data_alloc: 234881024 data_used: 17223680
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:17.553454+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 14172160 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:18.553628+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f893d000/0x0/0x4ffc00000, data 0x286564d/0x291f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 14172160 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.714754105s of 14.909518242s, submitted: 90
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b2c00 session 0x563f26fd72c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f249e8b40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:19.553772+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8e400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115744768 unmapped: 19021824 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8e400 session 0x563f263992c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:20.553952+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:21.554133+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263838 data_alloc: 234881024 data_used: 11476992
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:22.554329+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9751000/0x0/0x4ffc00000, data 0x1a5162a/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:23.554490+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:24.554668+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:25.554860+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9751000/0x0/0x4ffc00000, data 0x1a5162a/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:26.555186+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263838 data_alloc: 234881024 data_used: 11476992
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:27.555580+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:28.556055+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 19005440 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:29.556316+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 19005440 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9751000/0x0/0x4ffc00000, data 0x1a5162a/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:30.556595+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 19005440 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:31.556722+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 19005440 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263838 data_alloc: 234881024 data_used: 11476992
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:32.556889+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 19005440 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:33.557081+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 19005440 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9751000/0x0/0x4ffc00000, data 0x1a5162a/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:34.557266+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.300795555s of 15.466802597s, submitted: 51
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe000 session 0x563f25db34a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f26d47e00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 18997248 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:35.557410+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 18997248 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:36.557698+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171956 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:37.557916+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f25d64f00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:38.558092+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:39.558342+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:40.558532+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:41.558729+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171364 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:42.558969+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:43.559185+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:44.559392+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:45.559600+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:46.559794+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171364 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:47.559956+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:48.560096+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:49.560309+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:50.560499+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:51.560669+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171364 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:52.560935+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:53.561143+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:54.561328+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:55.561581+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:56.561767+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171364 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:57.561982+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: mgrc ms_handle_reset ms_handle_reset con 0x563f271fe400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/17115915
Dec 05 10:27:35 compute-0 ceph-osd[82677]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/17115915,v1:192.168.122.100:6801/17115915]
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: get_auth_request con 0x563f271ff800 auth_method 0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: mgrc handle_mgr_configure stats_period=5
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:58.562225+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:59.562515+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:00.562668+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:01.562873+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171364 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:02.563081+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:03.563188+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:04.563380+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:05.563567+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:06.563666+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f2615a3c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b2c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b2c00 session 0x563f25db3860
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f23dd8f00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112730112 unmapped: 22036480 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f27208f00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.416463852s of 32.573806763s, submitted: 31
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173122 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:07.563808+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f25de32c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe000 session 0x563f2701f2c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:08.563970+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x149e62a/0x1557000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:09.564157+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:10.564340+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:11.564521+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208357 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:12.564980+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f2701e780
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:13.565100+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x149e62a/0x1557000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f2701e000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:14.566702+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f2701ed20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:15.566799+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 21905408 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:16.566941+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 21905408 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:17.567102+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212353 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 21905408 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:18.567313+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.259483337s of 11.413383484s, submitted: 40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f26774d20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 21905408 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:19.567491+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 21905408 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:20.567668+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:21.567853+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:22.568076+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237597 data_alloc: 234881024 data_used: 11083776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:27:35] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec 05 10:27:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:27:35] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:23.568287+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:24.568439+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:25.568726+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:26.568879+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:27.569064+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237597 data_alloc: 234881024 data_used: 11083776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f23e04960
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe000 session 0x563f267754a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:28.569369+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.971426010s of 10.051115036s, submitted: 4
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f26fd7680
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:29.569570+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:30.569763+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:31.570404+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:32.571010+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b3000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179220 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:33.571495+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:34.571909+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:35.572323+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:36.572508+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b3000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:37.572722+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179220 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:38.572886+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.104510307s of 10.235854149s, submitted: 39
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f25d63860
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f25db2b40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f272b2d20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe000 session 0x563f23eae780
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:39.573086+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f26d57680
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 26804224 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:40.573336+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 26804224 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:41.574374+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 26804224 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:42.575144+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9b23000/0x0/0x4ffc00000, data 0x16815c8/0x1739000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 26804224 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221851 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:43.575369+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 26804224 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:44.575604+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 26771456 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9b23000/0x0/0x4ffc00000, data 0x16815c8/0x1739000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:45.575840+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113115136 unmapped: 24805376 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:46.576053+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113115136 unmapped: 24805376 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:47.576262+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113115136 unmapped: 24805376 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261351 data_alloc: 234881024 data_used: 13336576
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f2707c000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f272b2960
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:48.576396+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:49.576573+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:50.576775+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:51.576888+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:52.577181+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182525 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:53.577451+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:54.577682+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:55.577930+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:56.578126+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:57.578321+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182525 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:58.578500+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:59.578658+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:00.578810+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:01.578955+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:02.579292+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182525 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:03.579916+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:04.580516+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:05.581677+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:06.581910+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:07.582100+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182525 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:08.582490+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:09.582780+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:10.582985+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:11.583396+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:12.583655+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182525 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:13.583980+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 26558464 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:14.584216+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 26558464 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:15.610026+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 26558464 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:16.610436+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 26558464 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:17.610668+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182525 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 26558464 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f25d65a40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e44000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e44000 session 0x563f270850e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f25a5f4a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:18.610875+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f25a5fa40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.391620636s of 39.564292908s, submitted: 35
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 26288128 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f23e070e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f25d9be00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e44400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e44400 session 0x563f24064b40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f271d9680
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f25de2d20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:19.611320+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 26279936 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:20.611547+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 26279936 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:21.611712+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 26279936 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:22.611951+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194344 data_alloc: 218103808 data_used: 7503872
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 26279936 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:23.612321+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 26279936 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:24.612660+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 26271744 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:25.612851+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 26271744 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:26.613044+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 26271744 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:27.613222+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194344 data_alloc: 218103808 data_used: 7503872
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 26271744 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:28.613481+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 26271744 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.724584579s of 10.777653694s, submitted: 13
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:29.613655+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111493120 unmapped: 26427392 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:30.613889+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111493120 unmapped: 26427392 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:31.614121+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111493120 unmapped: 26427392 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:32.614335+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198428 data_alloc: 218103808 data_used: 8060928
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 26419200 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:33.614602+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 26419200 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:34.614915+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 26419200 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:35.615070+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 26419200 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:36.615606+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111509504 unmapped: 26411008 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:37.615771+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198428 data_alloc: 218103808 data_used: 8060928
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111509504 unmapped: 26411008 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:38.615922+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111509504 unmapped: 26411008 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:39.616043+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111509504 unmapped: 26411008 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:40.616197+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111509504 unmapped: 26411008 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:41.616338+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111509504 unmapped: 26411008 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.636252403s of 12.641905785s, submitted: 1
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:42.616550+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267216 data_alloc: 218103808 data_used: 8052736
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113926144 unmapped: 23994368 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:43.616726+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 23904256 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:44.616899+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 23904256 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:45.617027+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 23904256 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96e7000/0x0/0x4ffc00000, data 0x1ab45d8/0x1b6d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:46.617220+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 25141248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:47.617398+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270282 data_alloc: 218103808 data_used: 8052736
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 25141248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96e7000/0x0/0x4ffc00000, data 0x1abc5d8/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:48.617543+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 25141248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:49.617744+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 25141248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:50.617903+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 25141248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:51.618080+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 25141248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96e7000/0x0/0x4ffc00000, data 0x1abc5d8/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:52.618478+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.704484940s of 10.629067421s, submitted: 52
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270298 data_alloc: 218103808 data_used: 8052736
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 25116672 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:53.618674+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 25116672 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:54.618911+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 25116672 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96e7000/0x0/0x4ffc00000, data 0x1abc5d8/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:55.619068+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:56.619266+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:57.619541+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270298 data_alloc: 218103808 data_used: 8052736
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:58.619694+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:59.619879+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96e7000/0x0/0x4ffc00000, data 0x1abc5d8/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:00.620059+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:01.620302+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:02.620559+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270298 data_alloc: 218103808 data_used: 8052736
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96e7000/0x0/0x4ffc00000, data 0x1abc5d8/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:03.620710+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 25100288 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:04.620851+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.854027748s of 11.854028702s, submitted: 0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f26d57860
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 25272320 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:05.621042+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 25272320 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:06.621288+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 25272320 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:07.621502+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 25272320 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:08.622386+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 25272320 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:09.622549+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f26b525a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:10.622864+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:11.623276+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:12.623744+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:13.623963+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:14.624193+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:15.624321+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:16.624618+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:17.624817+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:18.625119+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:19.625291+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:20.625630+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:21.625837+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:22.626220+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:23.626528+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:24.626750+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:25.626928+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:26.627102+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:27.627319+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:28.627570+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:29.627750+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:30.627950+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:31.628090+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:32.628479+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:33.628662+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:34.628886+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:35.629327+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:36.629502+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:37.629650+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:38.629895+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:39.630077+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:40.630334+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:41.630545+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 25280512 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:42.630764+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 25280512 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:43.630895+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 25280512 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:44.631103+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 25280512 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:45.631304+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e44800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.238075256s of 40.941547394s, submitted: 27
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e44800 session 0x563f24065c20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:46.631506+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:47.632215+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235618 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:48.632458+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:49.632669+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f240652c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:50.632865+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9b07000/0x0/0x4ffc00000, data 0x169d5c8/0x1755000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f23ddba40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:51.633054+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9b07000/0x0/0x4ffc00000, data 0x169d5c8/0x1755000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:52.633223+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235618 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9b07000/0x0/0x4ffc00000, data 0x169d5c8/0x1755000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:53.633448+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f26b325a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f26fd6b40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 24821760 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:54.633606+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e44c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 24821760 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e45000
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:55.633797+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:56.633997+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:57.634147+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278310 data_alloc: 234881024 data_used: 13369344
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:58.634311+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ae3000/0x0/0x4ffc00000, data 0x16c15c8/0x1779000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:59.634473+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ae3000/0x0/0x4ffc00000, data 0x16c15c8/0x1779000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:00.634642+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:01.634849+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:02.635094+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278310 data_alloc: 234881024 data_used: 13369344
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ae3000/0x0/0x4ffc00000, data 0x16c15c8/0x1779000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:03.635276+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:04.635435+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:05.635548+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:06.635712+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.261785507s of 21.314563751s, submitted: 7
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ae3000/0x0/0x4ffc00000, data 0x16c15c8/0x1779000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 19816448 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:07.635860+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f91c1000/0x0/0x4ffc00000, data 0x1fe35c8/0x209b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348486 data_alloc: 234881024 data_used: 13598720
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 20684800 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 0 B/s wr, 98 op/s
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:08.636057+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 20480000 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:09.636372+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117522432 unmapped: 20398080 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:10.636518+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e45c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 22855680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e45c00 session 0x563f272090e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:11.637377+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8ad0000/0x0/0x4ffc00000, data 0x26d45c8/0x278c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 22855680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:12.637625+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401384 data_alloc: 234881024 data_used: 13598720
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 22855680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:13.637781+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 22847488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:14.638037+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:15.638294+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 22847488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8ad0000/0x0/0x4ffc00000, data 0x26d45c8/0x278c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:16.638548+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 22839296 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:17.638771+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 22839296 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402104 data_alloc: 234881024 data_used: 13598720
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8acd000/0x0/0x4ffc00000, data 0x26d75c8/0x278f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:18.639008+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 22839296 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:19.639305+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 22839296 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.501544952s of 12.911491394s, submitted: 75
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f27084780
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:20.639557+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118775808 unmapped: 22822912 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:21.639876+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 121044992 unmapped: 20553728 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:22.640224+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8acd000/0x0/0x4ffc00000, data 0x26d75c8/0x278f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451237 data_alloc: 234881024 data_used: 20717568
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:23.640545+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8acd000/0x0/0x4ffc00000, data 0x26d75c8/0x278f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:24.640764+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:25.640965+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8acd000/0x0/0x4ffc00000, data 0x26d75c8/0x278f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:26.641171+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:27.641404+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451237 data_alloc: 234881024 data_used: 20717568
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:28.641582+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:29.641841+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:30.642064+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:31.642313+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8acd000/0x0/0x4ffc00000, data 0x26d75c8/0x278f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.388243675s of 12.616801262s, submitted: 6
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:32.642470+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124977152 unmapped: 16621568 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472753 data_alloc: 234881024 data_used: 20783104
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:33.642700+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 16482304 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:34.642958+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 16482304 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:35.643140+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 16482304 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:36.643378+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 16482304 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f6000/0x0/0x4ffc00000, data 0x29ae5c8/0x2a66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:37.643649+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 16482304 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476853 data_alloc: 234881024 data_used: 20783104
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:38.643830+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 16482304 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:39.644104+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 16449536 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:40.644328+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 16449536 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:41.644557+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 17219584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:42.644902+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 17219584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.793564320s of 10.690699577s, submitted: 32
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476061 data_alloc: 234881024 data_used: 20783104
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:43.645057+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 17219584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:44.645390+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 17219584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:45.645555+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 17219584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:46.646198+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 17219584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:47.647116+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 17211392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476061 data_alloc: 234881024 data_used: 20783104
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:48.647953+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 17211392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:49.648671+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 17211392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:50.648856+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 17211392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:51.649327+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 17211392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:52.649933+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 17211392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476061 data_alloc: 234881024 data_used: 20783104
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:53.650393+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 17203200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:54.650848+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 17203200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:55.651080+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:56.651381+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:57.651725+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:58.652027+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476061 data_alloc: 234881024 data_used: 20783104
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:59.652391+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:00.652665+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:01.652937+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:02.653263+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:03.653509+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476061 data_alloc: 234881024 data_used: 20783104
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f2615af00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.738149643s of 20.738151550s, submitted: 0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f26b32780
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:04.654038+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 17186816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:05.654383+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 17186816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:06.654518+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f271d9e00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 21594112 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:07.655281+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 21594112 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:08.655661+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360097 data_alloc: 234881024 data_used: 13598720
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9197000/0x0/0x4ffc00000, data 0x200d5c8/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 21594112 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:09.655899+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e45000 session 0x563f26b53a40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e44c00 session 0x563f26fd72c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 21594112 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f26b32f00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:10.656385+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:11.656635+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:12.656844+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:13.657031+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207153 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:14.657172+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:15.657323+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:16.658210+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:17.658439+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:18.658639+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207153 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:19.659067+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:20.659313+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:21.659498+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:22.660396+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:23.660552+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207153 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:24.660931+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:25.661223+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:26.661491+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:27.661775+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:28.662382+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207153 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:29.663022+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:30.663846+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:31.664158+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:32.664675+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:33.665203+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207153 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:34.665664+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:35.665876+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:36.666163+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:37.666432+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:38.666693+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207153 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f26fd7c20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f26d572c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f23ddba40
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f26b323c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.104686737s of 35.593875885s, submitted: 33
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:39.666977+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f2707c5a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f25db43c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e44c00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e44c00 session 0x563f2701e1e0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f284ee400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f284ee400 session 0x563f26b53e00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f2701f4a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:40.667203+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:41.667453+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f284ef800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f284ef800 session 0x563f24d21860
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:42.667714+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:43.668026+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f284efc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f284efc00 session 0x563f27085860
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227580 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:44.668339+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f270852c0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f23dda5a0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:45.668562+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:46.668876+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:47.669105+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:48.669440+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236984 data_alloc: 218103808 data_used: 8843264
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 25747456 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:49.669691+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:50.669907+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:51.670087+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:52.670312+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:53.670523+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236984 data_alloc: 218103808 data_used: 8843264
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:54.670768+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:55.671060+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:56.671329+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:57.671506+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:58.671660+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236984 data_alloc: 218103808 data_used: 8843264
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.522546768s of 19.943984985s, submitted: 20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:59.671855+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118030336 unmapped: 23568384 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:00.672035+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 22528000 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:01.672554+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 21397504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:02.672830+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120209408 unmapped: 21389312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x182c5c8/0x18e4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,4,2])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:03.672985+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292212 data_alloc: 234881024 data_used: 9801728
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:04.673185+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:05.673340+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:06.673619+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.9 total, 600.0 interval
                                           Cumulative writes: 12K writes, 44K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3681 syncs, 3.41 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2310 writes, 7161 keys, 2310 commit groups, 1.0 writes per commit group, ingest: 7.02 MB, 0.01 MB/s
                                           Interval WAL: 2310 writes, 1005 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:07.673820+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:08.674069+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292380 data_alloc: 234881024 data_used: 9805824
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:09.674332+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.421975136s of 10.311377525s, submitted: 62
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:10.674570+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:11.674764+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:12.675004+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:13.675202+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292396 data_alloc: 234881024 data_used: 9805824
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:14.675450+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:15.675650+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:16.675818+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:17.675999+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:18.676328+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292396 data_alloc: 234881024 data_used: 9805824
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f25db2d20
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 22798336 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:19.676489+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f2707c960
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:20.676718+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:21.676985+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:22.677298+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:23.677524+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:24.677668+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:25.677861+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:26.678053+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:27.678188+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:28.678346+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:29.678512+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:30.678682+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:31.678865+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:32.679083+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:33.679241+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:34.679449+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:35.679672+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:36.679857+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:37.680022+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:38.680222+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:39.680453+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:40.680665+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:41.680891+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:42.681119+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:43.681324+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:44.681497+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:45.681754+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:46.681928+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:47.682117+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:48.682313+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:49.682499+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:50.682651+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:51.682822+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:52.683055+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:53.683250+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:54.683429+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:55.683607+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:56.683881+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:57.684151+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:58.684375+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:59.684630+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:00.684868+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:01.685028+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:02.685261+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:03.685418+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:04.685588+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:05.685807+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:06.686148+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:07.686455+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:08.686693+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:09.686902+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:10.687133+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:11.687316+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:12.687633+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:13.687816+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:14.688025+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:15.688222+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:16.688481+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:17.688644+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:18.688858+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:19.689013+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:20.689189+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:21.689387+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:22.689648+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:23.689818+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:24.689995+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:25.690169+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:26.690384+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:27.690522+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:28.690694+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:29.690864+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:30.691001+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:31.691136+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:32.691384+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:33.691659+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:34.691841+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:35.691968+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:36.692143+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:37.692457+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:38.692620+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:39.692773+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:40.692943+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:41.693111+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:42.693298+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:43.693440+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:44.693577+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:45.693780+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:46.693986+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:47.694159+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:48.694315+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:49.694571+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:50.694760+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:51.694991+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:52.695196+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:53.695348+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:54.695510+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 105.019371033s of 105.452400208s, submitted: 28
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:55.695642+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24412160 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:56.695779+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24412160 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:57.695921+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 24371200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1,1])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:58.696082+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:59.696265+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 24281088 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:00.696401+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:01.696549+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 24281088 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: do_command 'config diff' '{prefix=config diff}'
Dec 05 10:27:35 compute-0 ceph-osd[82677]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 10:27:35 compute-0 ceph-osd[82677]: do_command 'config show' '{prefix=config show}'
Dec 05 10:27:35 compute-0 ceph-osd[82677]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 10:27:35 compute-0 ceph-osd[82677]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 10:27:35 compute-0 ceph-osd[82677]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:02.696760+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:27:35 compute-0 ceph-osd[82677]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 10:27:35 compute-0 ceph-osd[82677]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24526848 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:03.696926+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24805376 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:27:35 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:27:35 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:27:35 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:04.697104+0000)
Dec 05 10:27:35 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24494080 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:27:35 compute-0 ceph-osd[82677]: do_command 'log dump' '{prefix=log dump}'
Dec 05 10:27:35 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16437 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 05 10:27:35 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/811959183' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26315 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16452 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.16386 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.26668 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.26261 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.16404 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/93246847' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.26680 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1620226128' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.26276 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4062701696' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.16416 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2275662628' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4131505438' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2810760022' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/811959183' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/822688499' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 05 10:27:36 compute-0 sudo[281179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:27:36 compute-0 sudo[281179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:27:36 compute-0 sudo[281179]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:36.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 05 10:27:36 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026965212' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26330 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:36 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16470 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:36.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 10:27:37 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1433106751' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16491 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.26707 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.26303 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 0 B/s wr, 98 op/s
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.16437 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.26315 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.16452 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2859526763' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2717752638' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1570668120' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3026965212' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1968492118' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2502571114' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1056000572' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1874277918' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1433106751' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/562407657' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:37.490Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:37 compute-0 nova_compute[257087]: 2025-12-05 10:27:37.517 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 05 10:27:37 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4119970530' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:27:37 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16503 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:37 compute-0 crontab[281382]: (root) LIST (root)
Dec 05 10:27:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 0 B/s wr, 98 op/s
Dec 05 10:27:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:27:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:27:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:27:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:27:38 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16518 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 05 10:27:38 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1958408918' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 05 10:27:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:38.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:38 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26390 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:38.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.26330 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.16470 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.16491 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3912650591' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2465957474' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4119970530' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.16503 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 0 B/s wr, 98 op/s
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1423199612' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3034883199' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3297777500' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1958645482' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.16518 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1958408918' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3376772651' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/335276700' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4186424797' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/101859568' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 05 10:27:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:38.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:39 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26845 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16539 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 05 10:27:39 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4219939554' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26866 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26869 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 05 10:27:39 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/240204466' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 05 10:27:39 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/65328838' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 0 B/s wr, 98 op/s
Dec 05 10:27:39 compute-0 ceph-mon[74418]: from='client.26390 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1569283451' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2766982921' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: from='client.26845 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/409208528' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: from='client.16539 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4219939554' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/437192142' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/468663819' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/240204466' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/354809457' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/65328838' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 05 10:27:39 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26890 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 05 10:27:40 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3268326352' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26905 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 05 10:27:40 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315009104' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 05 10:27:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:40.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:40 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26462 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26917 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 05 10:27:40 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919092670' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 05 10:27:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:40.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 05 10:27:40 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2846097266' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: from='client.26866 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: from='client.26869 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 0 B/s wr, 98 op/s
Dec 05 10:27:40 compute-0 ceph-mon[74418]: from='client.26890 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/490217759' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2407989269' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3268326352' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: from='client.26905 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1315009104' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2763247992' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/839773824' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1919092670' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26477 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:40 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 05 10:27:40 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857307852' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26483 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26495 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 05 10:27:41 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3635472287' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26941 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 05 10:27:41 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3630481189' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:41 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26513 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 55 op/s
Dec 05 10:27:41 compute-0 ceph-mon[74418]: from='client.26462 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: from='client.26917 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2846097266' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: from='client.26477 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1857307852' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4223828606' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: from='client.26483 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: from='client.26495 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3635472287' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3630481189' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3360643847' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26956 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 05 10:27:41 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3098804496' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 05 10:27:41 compute-0 systemd[1]: Starting Hostname Service...
Dec 05 10:27:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 05 10:27:41 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2975896445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 05 10:27:42 compute-0 systemd[1]: Started Hostname Service.
Dec 05 10:27:42 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26531 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:27:42 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26974 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:42.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:27:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 05 10:27:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2973486229' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 05 10:27:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/544888471' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 nova_compute[257087]: 2025-12-05 10:27:42.519 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:42 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26549 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:27:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:27:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:27:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:42.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='client.26941 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='client.26513 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 55 op/s
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='client.26956 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3098804496' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2975896445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2305194484' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='client.26531 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='client.26974 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2607880857' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2973486229' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/544888471' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:27:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3011094949' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 05 10:27:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576239001' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 05 10:27:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672396338' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 05 10:27:42 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26570 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:27:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:27:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:27:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:27:43 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16683 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:43 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16680 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:43 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26603 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 55 op/s
Dec 05 10:27:43 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16692 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:43.779Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:43 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27052 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:27:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:27:43 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16701 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:27:44 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:27:44 compute-0 ceph-mon[74418]: from='client.26549 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:44 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:27:44 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:27:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2576239001' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 05 10:27:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2672396338' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 05 10:27:44 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:27:44 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:27:44 compute-0 ceph-mon[74418]: from='client.26570 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1019565532' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:27:44 compute-0 ceph-mon[74418]: from='client.16683 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2707077225' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 05 10:27:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1901747528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 05 10:27:44 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16716 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:44.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:44.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:44 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26657 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 05 10:27:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3281214319' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='client.16680 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='client.26603 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 55 op/s
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='client.16692 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='client.27052 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='client.16701 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='client.16716 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2572532145' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:27:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4195446778' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 10:27:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:27:45] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:27:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:27:45] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:27:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:45 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16749 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Dec 05 10:27:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1290825856' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16767 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27112 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26720 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:46.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='client.26657 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/281446043' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3281214319' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='client.16749 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2737736751' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1460225640' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1290825856' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='client.16767 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='client.27112 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: from='client.26720 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 05 10:27:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1694401334' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:27:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:46.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:46 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16782 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 05 10:27:47 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227182942' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:27:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Dec 05 10:27:47 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2922700375' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:27:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:47.494Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:47 compute-0 nova_compute[257087]: 2025-12-05 10:27:47.523 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:47 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27142 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1694401334' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4198885059' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: from='client.16782 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1592893461' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4227182942' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:27:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/973306642' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2922700375' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:27:47 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:27:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:27:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:27:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:27:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:27:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 05 10:27:48 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/866238070' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 05 10:27:48 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27178 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:48 compute-0 ceph-mon[74418]: from='client.27142 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:48 compute-0 ceph-mon[74418]: pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:48 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:27:48 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:27:48 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:27:48 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:27:48 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1569167030' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 05 10:27:48 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:27:48 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:27:48 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2276596805' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 05 10:27:48 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:27:48 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:27:48 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/866238070' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 05 10:27:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:48.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:48.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:27:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:48.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:49 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16860 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:49 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27187 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:49 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26795 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 05 10:27:49 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2760527904' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 05 10:27:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Dec 05 10:27:50 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2370346740' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 10:27:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:50.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:50.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 05 10:27:50 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862403798' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 05 10:27:50 compute-0 ceph-mon[74418]: from='client.27178 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:50 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1630273001' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 05 10:27:50 compute-0 ceph-mon[74418]: from='client.16860 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:50 compute-0 ceph-mon[74418]: from='client.27187 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:50 compute-0 ceph-mon[74418]: from='client.26795 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:50 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2786661124' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 05 10:27:50 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2760527904' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 05 10:27:50 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2576335940' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 05 10:27:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 05 10:27:51 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3699243362' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 05 10:27:51 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27217 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:51 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26828 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:51 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27223 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16896 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:52 compute-0 ceph-mon[74418]: pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2370346740' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 10:27:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1477103892' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 05 10:27:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1145580379' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 05 10:27:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1862403798' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 05 10:27:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3699243362' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 05 10:27:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:52.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:52 compute-0 nova_compute[257087]: 2025-12-05 10:27:52.525 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:52 compute-0 nova_compute[257087]: 2025-12-05 10:27:52.527 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:52.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:52 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26846 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 05 10:27:52 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3728682093' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 05 10:27:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:27:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:27:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:27:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:27:53 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27244 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:53 compute-0 ceph-mon[74418]: from='client.27217 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:53 compute-0 ceph-mon[74418]: pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:53 compute-0 ceph-mon[74418]: from='client.26828 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:53 compute-0 ceph-mon[74418]: from='client.27223 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:53 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2824401247' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 05 10:27:53 compute-0 ceph-mon[74418]: from='client.16896 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:53 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3958619009' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 05 10:27:53 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2521916619' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 05 10:27:53 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3728682093' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 05 10:27:53 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26852 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 05 10:27:53 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2631370955' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 05 10:27:53 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27253 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:53.780Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:53 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16926 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:54.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16941 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec 05 10:27:54 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1004019440' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 05 10:27:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:27:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:54.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26894 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:54 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:27:55 compute-0 ceph-mon[74418]: from='client.26846 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:55 compute-0 ceph-mon[74418]: from='client.27244 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:55 compute-0 ceph-mon[74418]: from='client.26852 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2631370955' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 05 10:27:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2128788511' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 05 10:27:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2615336732' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 10:27:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/184335658' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 05 10:27:55 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16950 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:55 compute-0 ovs-appctl[283798]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 05 10:27:55 compute-0 ovs-appctl[283810]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 05 10:27:55 compute-0 ovs-appctl[283815]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 05 10:27:55 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27286 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:55 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16959 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:27:55] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:27:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:27:55] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:27:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Dec 05 10:27:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3610450680' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 05 10:27:55 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26927 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: from='client.27253 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:27:56 compute-0 ceph-mon[74418]: from='client.16926 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: from='client.16941 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4034350416' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1004019440' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: from='client.26894 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1435150209' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: from='client.16950 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/20824485' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/4106523688' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/186542332' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3610450680' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 05 10:27:56 compute-0 sudo[284152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:27:56 compute-0 sudo[284152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:27:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:27:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:56.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:27:56 compute-0 sudo[284152]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Dec 05 10:27:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2639872881' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26933 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:27:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:56.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:27:56 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16980 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.16989 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:27:57 compute-0 ceph-mon[74418]: from='client.27286 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mon[74418]: from='client.16959 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mon[74418]: pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:57 compute-0 ceph-mon[74418]: from='client.26927 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3808309741' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2639872881' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1087401947' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/844081254' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/4171063904' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/4171063904' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:27:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:57.495Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:57 compute-0 nova_compute[257087]: 2025-12-05 10:27:57.529 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:27:57 compute-0 nova_compute[257087]: 2025-12-05 10:27:57.529 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:57 compute-0 nova_compute[257087]: 2025-12-05 10:27:57.530 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:27:57 compute-0 nova_compute[257087]: 2025-12-05 10:27:57.530 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:27:57 compute-0 nova_compute[257087]: 2025-12-05 10:27:57.530 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:27:57 compute-0 nova_compute[257087]: 2025-12-05 10:27:57.531 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:27:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Dec 05 10:27:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/368929412' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27337 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:27:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:27:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:27:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:27:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:27:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:27:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:27:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Dec 05 10:27:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968722536' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 05 10:27:58 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.26972 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:27:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:27:58.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:27:58 compute-0 ceph-mon[74418]: from='client.26933 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:58 compute-0 ceph-mon[74418]: from='client.16980 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:58 compute-0 ceph-mon[74418]: from='client.16989 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/270312649' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 05 10:27:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1452167836' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 05 10:27:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:27:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/368929412' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 05 10:27:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1777715593' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 05 10:27:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3968722536' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 05 10:27:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2940039424' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 05 10:27:58 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17031 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:27:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:27:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:27:58.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:27:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:27:58.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:27:59 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17037 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:27:59 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27361 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:27:59 compute-0 nova_compute[257087]: 2025-12-05 10:27:59.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:59 compute-0 nova_compute[257087]: 2025-12-05 10:27:59.530 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 10:27:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:28:00 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27017 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 05 10:28:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:00.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 05 10:28:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:00.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:01 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27035 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec 05 10:28:01 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2449197080' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 10:28:02 compute-0 ceph-mon[74418]: from='client.27337 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:02 compute-0 ceph-mon[74418]: pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:02 compute-0 ceph-mon[74418]: from='client.26972 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1001305063' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:28:02 compute-0 ceph-mon[74418]: from='client.17031 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:28:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2573540111' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:02 compute-0 ceph-mon[74418]: from='client.17037 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:28:02 compute-0 ceph-mon[74418]: from='client.27361 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1150379385' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 05 10:28:02 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17049 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:28:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:02.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:28:02 compute-0 nova_compute[257087]: 2025-12-05 10:28:02.397 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Dec 05 10:28:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548646777' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 05 10:28:02 compute-0 nova_compute[257087]: 2025-12-05 10:28:02.532 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:28:02 compute-0 nova_compute[257087]: 2025-12-05 10:28:02.534 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:28:02 compute-0 nova_compute[257087]: 2025-12-05 10:28:02.534 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:28:02 compute-0 nova_compute[257087]: 2025-12-05 10:28:02.535 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:28:02 compute-0 nova_compute[257087]: 2025-12-05 10:28:02.565 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:02 compute-0 nova_compute[257087]: 2025-12-05 10:28:02.567 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:28:02 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27388 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:02 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27050 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:28:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:02.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:28:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Dec 05 10:28:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3028389803' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:02 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27062 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:28:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:28:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:28:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:28:03 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17085 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:28:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:03.783Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:28:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:03.784Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:28:03 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27409 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27415 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:04 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:28:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:04.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Dec 05 10:28:04 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4129273587' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1833040003' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/4197703298' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/4015036070' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.27017 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3171859627' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/672506274' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.27035 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2449197080' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3417311010' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.17049 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/548646777' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3028389803' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1957321237' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:04.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:04 compute-0 podman[285663]: 2025-12-05 10:28:04.878569336 +0000 UTC m=+0.089395141 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Dec 05 10:28:04 compute-0 podman[285661]: 2025-12-05 10:28:04.960077922 +0000 UTC m=+0.169687204 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:28:04 compute-0 podman[285662]: 2025-12-05 10:28:04.976894979 +0000 UTC m=+0.182295787 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27095 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Dec 05 10:28:05 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3496930579' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27433 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27101 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:28:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:28:05] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:28:05] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Dec 05 10:28:05 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190699490' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:05 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27442 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.27388 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.27050 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.27062 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1315397461' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/269153205' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.17085 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.27409 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.27415 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4129273587' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1047340020' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2718309034' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/415770678' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.27095 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3496930579' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Dec 05 10:28:06 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1312610312' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:06.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:06 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17133 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:06.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:06 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27128 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:07 compute-0 ceph-mon[74418]: from='client.27433 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:07 compute-0 ceph-mon[74418]: from='client.27101 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:07 compute-0 ceph-mon[74418]: pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/190699490' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:07 compute-0 ceph-mon[74418]: from='client.27442 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3017157951' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:28:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1312610312' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 05 10:28:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/523861079' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 10:28:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3503782246' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 05 10:28:07 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1430084301' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 05 10:28:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Dec 05 10:28:07 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659449231' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 05 10:28:07 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27137 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:07.497Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:07 compute-0 nova_compute[257087]: 2025-12-05 10:28:07.568 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Dec 05 10:28:07 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3543596169' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:28:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:28:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:28:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:28:08 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17160 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:08 compute-0 ceph-mon[74418]: from='client.17133 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:08 compute-0 ceph-mon[74418]: from='client.27128 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:08 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1659449231' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 05 10:28:08 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3543596169' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:08 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1469412761' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 10:28:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:08.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:08 compute-0 virtqemud[256610]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 05 10:28:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Dec 05 10:28:08 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3276633476' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 05 10:28:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:08.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:08.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:08 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17172 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:09 compute-0 systemd[1]: Starting Time & Date Service...
Dec 05 10:28:09 compute-0 ceph-mon[74418]: from='client.27137 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:09 compute-0 ceph-mon[74418]: pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:09 compute-0 ceph-mon[74418]: from='client.17160 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:09 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2658744645' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 05 10:28:09 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3276633476' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 05 10:28:09 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17178 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:09 compute-0 systemd[1]: Started Time & Date Service.
Dec 05 10:28:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:28:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Dec 05 10:28:09 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2639926973' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:10 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Dec 05 10:28:10 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111130787' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 05 10:28:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:10.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:10 compute-0 ceph-mon[74418]: from='client.17172 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:10 compute-0 ceph-mon[74418]: from='client.17178 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:10 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2639926973' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 05 10:28:10 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4111130787' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17196 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:10.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17202 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:10 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:28:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Dec 05 10:28:11 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3432274958' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:28:11 compute-0 ceph-mon[74418]: pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:28:11 compute-0 ceph-mon[74418]: from='client.17196 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:11 compute-0 ceph-mon[74418]: from='client.17202 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:11 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3432274958' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:28:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Dec 05 10:28:11 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3994305560' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 05 10:28:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:12.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:12 compute-0 nova_compute[257087]: 2025-12-05 10:28:12.571 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:28:12 compute-0 nova_compute[257087]: 2025-12-05 10:28:12.573 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:28:12 compute-0 nova_compute[257087]: 2025-12-05 10:28:12.574 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:28:12 compute-0 nova_compute[257087]: 2025-12-05 10:28:12.574 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:28:12 compute-0 nova_compute[257087]: 2025-12-05 10:28:12.607 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:12 compute-0 nova_compute[257087]: 2025-12-05 10:28:12.608 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:28:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:12.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:28:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:28:12 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec 05 10:28:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:12.977024) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:28:12 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec 05 10:28:12 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930492977199, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1391, "num_deletes": 508, "total_data_size": 1447362, "memory_usage": 1475448, "flush_reason": "Manual Compaction"}
Dec 05 10:28:12 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec 05 10:28:12 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930492997344, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1093109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32962, "largest_seqno": 34351, "table_properties": {"data_size": 1086571, "index_size": 2781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 23108, "raw_average_key_size": 21, "raw_value_size": 1069645, "raw_average_value_size": 1007, "num_data_blocks": 118, "num_entries": 1062, "num_filter_entries": 1062, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930436, "oldest_key_time": 1764930436, "file_creation_time": 1764930492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:28:12 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 20598 microseconds, and 5766 cpu microseconds.
Dec 05 10:28:12 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:28:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:28:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:28:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:28:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:12.997627) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1093109 bytes OK
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:12.997707) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.010397) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.010467) EVENT_LOG_v1 {"time_micros": 1764930493010452, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.010548) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1438929, prev total WAL file size 1457229, number of live WAL files 2.
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.012217) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1067KB)], [71(14MB)]
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930493012436, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 16759566, "oldest_snapshot_seqno": -1}
Dec 05 10:28:13 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1382550123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:28:13 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3994305560' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 05 10:28:13 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17220 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6539 keys, 12802099 bytes, temperature: kUnknown
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930493184631, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 12802099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12760773, "index_size": 23870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 172486, "raw_average_key_size": 26, "raw_value_size": 12645220, "raw_average_value_size": 1933, "num_data_blocks": 938, "num_entries": 6539, "num_filter_entries": 6539, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764930493, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.185067) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 12802099 bytes
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.187288) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.2 rd, 74.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 14.9 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(27.0) write-amplify(11.7) OK, records in: 7548, records dropped: 1009 output_compression: NoCompression
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.187310) EVENT_LOG_v1 {"time_micros": 1764930493187299, "job": 40, "event": "compaction_finished", "compaction_time_micros": 172355, "compaction_time_cpu_micros": 62661, "output_level": 6, "num_output_files": 1, "total_output_size": 12802099, "num_input_records": 7548, "num_output_records": 6539, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930493187715, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930493191452, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.011957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.191553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.191561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.191563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.191565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:28:13 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:28:13.191567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:28:13 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17232 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:28:13 compute-0 nova_compute[257087]: 2025-12-05 10:28:13.713 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:13 compute-0 nova_compute[257087]: 2025-12-05 10:28:13.714 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:13.785Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:13 compute-0 nova_compute[257087]: 2025-12-05 10:28:13.842 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:13 compute-0 nova_compute[257087]: 2025-12-05 10:28:13.842 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:13 compute-0 nova_compute[257087]: 2025-12-05 10:28:13.842 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:13 compute-0 nova_compute[257087]: 2025-12-05 10:28:13.843 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:13 compute-0 nova_compute[257087]: 2025-12-05 10:28:13.843 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:13 compute-0 nova_compute[257087]: 2025-12-05 10:28:13.843 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:28:13 compute-0 nova_compute[257087]: 2025-12-05 10:28:13.843 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 05 10:28:14 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3783081292' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 10:28:14 compute-0 ceph-mon[74418]: pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:28:14 compute-0 ceph-mon[74418]: from='client.17220 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/4196844647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:28:14 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4059439512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:28:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:14.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:14 compute-0 nova_compute[257087]: 2025-12-05 10:28:14.440 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:28:14 compute-0 nova_compute[257087]: 2025-12-05 10:28:14.441 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:28:14 compute-0 nova_compute[257087]: 2025-12-05 10:28:14.441 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:28:14 compute-0 nova_compute[257087]: 2025-12-05 10:28:14.441 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:28:14 compute-0 nova_compute[257087]: 2025-12-05 10:28:14.442 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:28:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Dec 05 10:28:14 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2261941520' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 05 10:28:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:14.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:28:14 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/971553058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:28:14 compute-0 nova_compute[257087]: 2025-12-05 10:28:14.944 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:28:15 compute-0 nova_compute[257087]: 2025-12-05 10:28:15.110 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:28:15 compute-0 nova_compute[257087]: 2025-12-05 10:28:15.112 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4348MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:28:15 compute-0 nova_compute[257087]: 2025-12-05 10:28:15.113 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:28:15 compute-0 nova_compute[257087]: 2025-12-05 10:28:15.113 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:28:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:28:15] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:28:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:28:15] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:28:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:16.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:16 compute-0 sudo[286664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:28:16 compute-0 sudo[286664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:28:16 compute-0 sudo[286664]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:16.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.070 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.072 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.098 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing inventories for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.211 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Updating ProviderTree inventory for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.212 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Updating inventory in ProviderTree for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.240 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing aggregate associations for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.272 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing trait associations for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6, traits: HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_MMX,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.300 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:28:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:17.498Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.608 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.610 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:28:17 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1775780705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.791 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.798 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:28:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.932 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.934 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.934 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.935 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:17 compute-0 nova_compute[257087]: 2025-12-05 10:28:17.935 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 10:28:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:28:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:28:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:28:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:28:18 compute-0 nova_compute[257087]: 2025-12-05 10:28:18.099 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 10:28:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:18.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:18 compute-0 ceph-mon[74418]: from='client.17232 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:28:18 compute-0 ceph-mon[74418]: pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:28:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3783081292' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 10:28:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2261941520' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 05 10:28:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/971553058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:28:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/452511592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:28:18 compute-0 nova_compute[257087]: 2025-12-05 10:28:18.787 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:18 compute-0 nova_compute[257087]: 2025-12-05 10:28:18.788 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:28:18 compute-0 nova_compute[257087]: 2025-12-05 10:28:18.788 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:28:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:18.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:18.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:19 compute-0 nova_compute[257087]: 2025-12-05 10:28:19.159 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:28:19 compute-0 nova_compute[257087]: 2025-12-05 10:28:19.160 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:19 compute-0 nova_compute[257087]: 2025-12-05 10:28:19.160 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 10:28:20 compute-0 sudo[286716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:28:20 compute-0 sudo[286716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:28:20 compute-0 sudo[286716]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:20 compute-0 sudo[286741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:28:20 compute-0 sudo[286741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:28:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:20.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:28:20.587 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:28:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:28:20.588 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:28:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:28:20.589 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:28:20 compute-0 sudo[286741]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:20.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:28:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:28:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:28:20 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:28:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 10:28:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:28:20 compute-0 ceph-mon[74418]: pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:20 compute-0 ceph-mon[74418]: pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:28:20 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1775780705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:28:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:22.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:22 compute-0 nova_compute[257087]: 2025-12-05 10:28:22.612 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:28:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:22.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 10:28:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:28:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:28:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:28:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:28:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:23.787Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:24.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 0 B/s wr, 8 op/s
Dec 05 10:28:25 compute-0 ceph-mds[96460]: mds.beacon.cephfs.compute-0.hfgtsk missed beacon ack from the monitors
Dec 05 10:28:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:28:25] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:28:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:28:25] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:28:26 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.314220428s, txc = 0x563f23f84f00
Dec 05 10:28:26 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 5.443528652s
Dec 05 10:28:26 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.443528652s
Dec 05 10:28:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:26.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:26 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.558694839s, txc = 0x563f24856000
Dec 05 10:28:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:26.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 0 B/s wr, 8 op/s
Dec 05 10:28:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:27.500Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:28:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:27.501Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:27 compute-0 nova_compute[257087]: 2025-12-05 10:28:27.615 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:28:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:28:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:28:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:28:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:28:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:28:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:28:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:28:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:28:27
Dec 05 10:28:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:28:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:28:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.rgw.root', 'volumes', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.meta']
Dec 05 10:28:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:28:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:28:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:28:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:28:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:28:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:28.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 11 op/s
Dec 05 10:28:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:28.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:28:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:28:29 compute-0 ceph-mon[74418]: pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 10:28:29 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:28:29 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:28:29 compute-0 ceph-mon[74418]: pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 10:28:29 compute-0 ceph-mon[74418]: pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 10:28:29 compute-0 ceph-mon[74418]: pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 0 B/s wr, 8 op/s
Dec 05 10:28:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:28:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:28:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:28:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:28:29 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:28:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:28:29 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:28:29 compute-0 sudo[286810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:28:29 compute-0 sudo[286810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:28:29 compute-0 sudo[286810]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:30 compute-0 sudo[286835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:28:30 compute-0 sudo[286835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:28:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:30 compute-0 podman[286902]: 2025-12-05 10:28:30.414987758 +0000 UTC m=+0.027797597 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:28:30 compute-0 podman[286902]: 2025-12-05 10:28:30.710610814 +0000 UTC m=+0.323420633 container create a8b2b135a37bc22308f90d0dd2003d261af85fa09dee15fbf7eade951020448c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:28:30 compute-0 ceph-mon[74418]: pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 0 B/s wr, 8 op/s
Dec 05 10:28:30 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:28:30 compute-0 ceph-mon[74418]: pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 11 op/s
Dec 05 10:28:30 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:28:30 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:28:30 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:28:30 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:28:30 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:28:30 compute-0 systemd[1]: Started libpod-conmon-a8b2b135a37bc22308f90d0dd2003d261af85fa09dee15fbf7eade951020448c.scope.
Dec 05 10:28:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:30.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 10:28:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:28:30 compute-0 podman[286902]: 2025-12-05 10:28:30.857816074 +0000 UTC m=+0.470625923 container init a8b2b135a37bc22308f90d0dd2003d261af85fa09dee15fbf7eade951020448c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:28:30 compute-0 podman[286902]: 2025-12-05 10:28:30.873319346 +0000 UTC m=+0.486129165 container start a8b2b135a37bc22308f90d0dd2003d261af85fa09dee15fbf7eade951020448c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:28:30 compute-0 podman[286902]: 2025-12-05 10:28:30.877732506 +0000 UTC m=+0.490542335 container attach a8b2b135a37bc22308f90d0dd2003d261af85fa09dee15fbf7eade951020448c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_nightingale, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:28:30 compute-0 focused_nightingale[286920]: 167 167
Dec 05 10:28:30 compute-0 conmon[286920]: conmon a8b2b135a37bc22308f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8b2b135a37bc22308f90d0dd2003d261af85fa09dee15fbf7eade951020448c.scope/container/memory.events
Dec 05 10:28:30 compute-0 systemd[1]: libpod-a8b2b135a37bc22308f90d0dd2003d261af85fa09dee15fbf7eade951020448c.scope: Deactivated successfully.
Dec 05 10:28:30 compute-0 podman[286902]: 2025-12-05 10:28:30.888717914 +0000 UTC m=+0.501527723 container died a8b2b135a37bc22308f90d0dd2003d261af85fa09dee15fbf7eade951020448c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_nightingale, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cca81661c44bfc9bdcbb7bd558abdbb442d627bd6f0db603e22d7093ca7cb7d-merged.mount: Deactivated successfully.
Dec 05 10:28:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:31 compute-0 podman[286902]: 2025-12-05 10:28:31.51546315 +0000 UTC m=+1.128272959 container remove a8b2b135a37bc22308f90d0dd2003d261af85fa09dee15fbf7eade951020448c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_nightingale, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:28:31 compute-0 systemd[1]: libpod-conmon-a8b2b135a37bc22308f90d0dd2003d261af85fa09dee15fbf7eade951020448c.scope: Deactivated successfully.
Dec 05 10:28:31 compute-0 podman[286946]: 2025-12-05 10:28:31.688668088 +0000 UTC m=+0.028595878 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:28:31 compute-0 podman[286946]: 2025-12-05 10:28:31.800837617 +0000 UTC m=+0.140765387 container create 6163d6b3f75877487e95e03364bd88c48da5d7a982331629596a7107486daa74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:28:32 compute-0 systemd[1]: Started libpod-conmon-6163d6b3f75877487e95e03364bd88c48da5d7a982331629596a7107486daa74.scope.
Dec 05 10:28:32 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed1cf60b2fe4c07e18b6593ae698ba2be5877e5142720b31d727c93e95651474/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed1cf60b2fe4c07e18b6593ae698ba2be5877e5142720b31d727c93e95651474/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed1cf60b2fe4c07e18b6593ae698ba2be5877e5142720b31d727c93e95651474/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed1cf60b2fe4c07e18b6593ae698ba2be5877e5142720b31d727c93e95651474/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed1cf60b2fe4c07e18b6593ae698ba2be5877e5142720b31d727c93e95651474/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:32 compute-0 ceph-mon[74418]: pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 10:28:32 compute-0 podman[286946]: 2025-12-05 10:28:32.341536814 +0000 UTC m=+0.681464604 container init 6163d6b3f75877487e95e03364bd88c48da5d7a982331629596a7107486daa74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:28:32 compute-0 podman[286946]: 2025-12-05 10:28:32.350813455 +0000 UTC m=+0.690741225 container start 6163d6b3f75877487e95e03364bd88c48da5d7a982331629596a7107486daa74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cartwright, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:28:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:32.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:32 compute-0 podman[286946]: 2025-12-05 10:28:32.471715981 +0000 UTC m=+0.811643761 container attach 6163d6b3f75877487e95e03364bd88c48da5d7a982331629596a7107486daa74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cartwright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:28:32 compute-0 nova_compute[257087]: 2025-12-05 10:28:32.618 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:28:32 compute-0 dazzling_cartwright[286964]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:28:32 compute-0 dazzling_cartwright[286964]: --> All data devices are unavailable
Dec 05 10:28:32 compute-0 systemd[1]: libpod-6163d6b3f75877487e95e03364bd88c48da5d7a982331629596a7107486daa74.scope: Deactivated successfully.
Dec 05 10:28:32 compute-0 podman[286946]: 2025-12-05 10:28:32.785544812 +0000 UTC m=+1.125472602 container died 6163d6b3f75877487e95e03364bd88c48da5d7a982331629596a7107486daa74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 10:28:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:28:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:32.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:28:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 0 B/s wr, 8 op/s
Dec 05 10:28:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:28:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:28:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:28:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed1cf60b2fe4c07e18b6593ae698ba2be5877e5142720b31d727c93e95651474-merged.mount: Deactivated successfully.
Dec 05 10:28:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:33.788Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:34.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:34.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 25 op/s
Dec 05 10:28:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:28:35] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Dec 05 10:28:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:28:35] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Dec 05 10:28:35 compute-0 podman[286946]: 2025-12-05 10:28:35.841728782 +0000 UTC m=+4.181656552 container remove 6163d6b3f75877487e95e03364bd88c48da5d7a982331629596a7107486daa74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 05 10:28:35 compute-0 ceph-mon[74418]: pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 0 B/s wr, 8 op/s
Dec 05 10:28:35 compute-0 sudo[286835]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:35 compute-0 systemd[1]: libpod-conmon-6163d6b3f75877487e95e03364bd88c48da5d7a982331629596a7107486daa74.scope: Deactivated successfully.
Dec 05 10:28:35 compute-0 podman[286997]: 2025-12-05 10:28:35.981286545 +0000 UTC m=+0.633733576 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:28:35 compute-0 podman[286999]: 2025-12-05 10:28:35.998125003 +0000 UTC m=+0.632132203 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 05 10:28:36 compute-0 sudo[287026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:28:36 compute-0 sudo[287026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:28:36 compute-0 sudo[287026]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:36 compute-0 podman[286998]: 2025-12-05 10:28:36.059299456 +0000 UTC m=+0.697798378 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 10:28:36 compute-0 sudo[287082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:28:36 compute-0 sudo[287082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:28:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:36.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:36 compute-0 sudo[287149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:28:36 compute-0 sudo[287149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:28:36 compute-0 sudo[287149]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:36 compute-0 podman[287152]: 2025-12-05 10:28:36.536740623 +0000 UTC m=+0.034408787 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:28:36 compute-0 podman[287152]: 2025-12-05 10:28:36.784544789 +0000 UTC m=+0.282212943 container create f3b93f9c0c9dafdf7466d6a02eeb7462def20ebf6fc99a64c918ad1972f44fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:28:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Dec 05 10:28:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:36.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:36 compute-0 systemd[1]: Started libpod-conmon-f3b93f9c0c9dafdf7466d6a02eeb7462def20ebf6fc99a64c918ad1972f44fcd.scope.
Dec 05 10:28:36 compute-0 ceph-mon[74418]: pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 25 op/s
Dec 05 10:28:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:28:37 compute-0 podman[287152]: 2025-12-05 10:28:37.028195931 +0000 UTC m=+0.525864105 container init f3b93f9c0c9dafdf7466d6a02eeb7462def20ebf6fc99a64c918ad1972f44fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 10:28:37 compute-0 podman[287152]: 2025-12-05 10:28:37.040023162 +0000 UTC m=+0.537691296 container start f3b93f9c0c9dafdf7466d6a02eeb7462def20ebf6fc99a64c918ad1972f44fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:28:37 compute-0 frosty_bohr[287190]: 167 167
Dec 05 10:28:37 compute-0 systemd[1]: libpod-f3b93f9c0c9dafdf7466d6a02eeb7462def20ebf6fc99a64c918ad1972f44fcd.scope: Deactivated successfully.
Dec 05 10:28:37 compute-0 podman[287152]: 2025-12-05 10:28:37.07780677 +0000 UTC m=+0.575474904 container attach f3b93f9c0c9dafdf7466d6a02eeb7462def20ebf6fc99a64c918ad1972f44fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 05 10:28:37 compute-0 podman[287152]: 2025-12-05 10:28:37.078403826 +0000 UTC m=+0.576071970 container died f3b93f9c0c9dafdf7466d6a02eeb7462def20ebf6fc99a64c918ad1972f44fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:28:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:37.502Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:28:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:37.503Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:28:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:37.504Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:37 compute-0 nova_compute[257087]: 2025-12-05 10:28:37.622 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bf1b96dbdd2e9824073788590795b3d7663dc8b17f688235140098c9d63b378-merged.mount: Deactivated successfully.
Dec 05 10:28:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:28:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:28:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:28:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:28:38 compute-0 ceph-mon[74418]: pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Dec 05 10:28:38 compute-0 podman[287152]: 2025-12-05 10:28:38.405822207 +0000 UTC m=+1.903490341 container remove f3b93f9c0c9dafdf7466d6a02eeb7462def20ebf6fc99a64c918ad1972f44fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:28:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:38.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:38 compute-0 systemd[1]: libpod-conmon-f3b93f9c0c9dafdf7466d6a02eeb7462def20ebf6fc99a64c918ad1972f44fcd.scope: Deactivated successfully.
Dec 05 10:28:38 compute-0 podman[287218]: 2025-12-05 10:28:38.606454619 +0000 UTC m=+0.060149075 container create c60b2a4a3c0b84befb526181da86a1e5fe1bd6b193f365013cb3e3aadabf4a8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 10:28:38 compute-0 systemd[1]: Started libpod-conmon-c60b2a4a3c0b84befb526181da86a1e5fe1bd6b193f365013cb3e3aadabf4a8f.scope.
Dec 05 10:28:38 compute-0 podman[287218]: 2025-12-05 10:28:38.578009516 +0000 UTC m=+0.031703992 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:28:38 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa5623a9a358da7d8373891c1f125d05b0a6c7b774b08296d941d82500aa419/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa5623a9a358da7d8373891c1f125d05b0a6c7b774b08296d941d82500aa419/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa5623a9a358da7d8373891c1f125d05b0a6c7b774b08296d941d82500aa419/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa5623a9a358da7d8373891c1f125d05b0a6c7b774b08296d941d82500aa419/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:38 compute-0 podman[287218]: 2025-12-05 10:28:38.826000367 +0000 UTC m=+0.279694843 container init c60b2a4a3c0b84befb526181da86a1e5fe1bd6b193f365013cb3e3aadabf4a8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:28:38 compute-0 podman[287218]: 2025-12-05 10:28:38.834409156 +0000 UTC m=+0.288103612 container start c60b2a4a3c0b84befb526181da86a1e5fe1bd6b193f365013cb3e3aadabf4a8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_jennings, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 05 10:28:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 30 op/s
Dec 05 10:28:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:38.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
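The three radosgw lines above form one access record from the beast frontend: a request marker, a completion line with op status and HTTP status, and an access line carrying client IP, timestamp, request, response size and latency. The anonymous "HEAD /" probes from 192.168.122.100 and 192.168.122.102 recur roughly every two seconds throughout this capture, which is consistent with load-balancer health checks rather than real S3 traffic. A minimal sketch for pulling the client, status and latency out of such beast lines; the regex and script name are illustrative, not part of the deployment:

    #!/usr/bin/env python3
    # parse_beast.py (hypothetical helper): extract client IP, HTTP status,
    # request line and latency from radosgw "beast" access-log lines piped
    # in on stdin, e.g. journal lines like the ones shown above.
    import re
    import sys

    BEAST = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<size>\d+) '
        r'.*latency=(?P<latency>[0-9.]+)s'
    )

    for line in sys.stdin:
        m = BEAST.search(line)
        if m:
            print(m['client'], m['status'], m['request'], m['latency'] + 's')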
Dec 05 10:28:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:38.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:28:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:38.891Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:28:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:38.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:39 compute-0 podman[287218]: 2025-12-05 10:28:39.024068141 +0000 UTC m=+0.477762607 container attach c60b2a4a3c0b84befb526181da86a1e5fe1bd6b193f365013cb3e3aadabf4a8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_jennings, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:28:39 compute-0 cool_jennings[287236]: {
Dec 05 10:28:39 compute-0 cool_jennings[287236]:     "1": [
Dec 05 10:28:39 compute-0 cool_jennings[287236]:         {
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             "devices": [
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "/dev/loop3"
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             ],
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             "lv_name": "ceph_lv0",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             "lv_size": "21470642176",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             "name": "ceph_lv0",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             "tags": {
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.cluster_name": "ceph",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.crush_device_class": "",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.encrypted": "0",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.osd_id": "1",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.type": "block",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.vdo": "0",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:                 "ceph.with_tpm": "0"
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             },
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             "type": "block",
Dec 05 10:28:39 compute-0 cool_jennings[287236]:             "vg_name": "ceph_vg0"
Dec 05 10:28:39 compute-0 cool_jennings[287236]:         }
Dec 05 10:28:39 compute-0 cool_jennings[287236]:     ]
Dec 05 10:28:39 compute-0 cool_jennings[287236]: }
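The JSON emitted by the cool_jennings container above is a ceph-volume report keyed by OSD id; its shape matches what a ceph-volume lvm list --format json run inside the cephadm-launched container would produce. Each entry describes one logical volume backing that OSD: the underlying devices, the LV path, size and UUID, and the ceph.* LVM tags (cluster fsid, osd fsid, encryption flag, and so on), present both as the raw lv_tags string and as the parsed tags map. A minimal sketch for summarizing such output, assuming it has been saved to a local file; the filename and field selection are illustrative:

    #!/usr/bin/env python3
    # Sketch: summarize a ceph-volume "lvm list"-style JSON report (as in the
    # container output above) as one line per OSD. The input filename
    # lvm_list.json is an assumption for the example.
    import json

    with open('lvm_list.json') as fh:
        osds = json.load(fh)  # {"<osd id>": [ {lv entry}, ... ], ... }

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get('tags', {})
            print(f"osd.{osd_id}: type={lv.get('type')} "
                  f"lv={lv.get('lv_path')} "
                  f"devices={','.join(lv.get('devices', []))} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")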
Dec 05 10:28:39 compute-0 systemd[1]: libpod-c60b2a4a3c0b84befb526181da86a1e5fe1bd6b193f365013cb3e3aadabf4a8f.scope: Deactivated successfully.
Dec 05 10:28:39 compute-0 podman[287218]: 2025-12-05 10:28:39.169272227 +0000 UTC m=+0.622966693 container died c60b2a4a3c0b84befb526181da86a1e5fe1bd6b193f365013cb3e3aadabf4a8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 05 10:28:39 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 05 10:28:39 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 10:28:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:40.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Dec 05 10:28:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:28:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:40.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:28:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-baa5623a9a358da7d8373891c1f125d05b0a6c7b774b08296d941d82500aa419-merged.mount: Deactivated successfully.
Dec 05 10:28:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:42.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:42 compute-0 nova_compute[257087]: 2025-12-05 10:28:42.625 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:28:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:28:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Dec 05 10:28:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:42.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:28:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:28:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:28:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:28:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:43.789Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:28:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:43.790Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:44 compute-0 ceph-mon[74418]: pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 30 op/s
Dec 05 10:28:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:44.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Dec 05 10:28:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:44.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:28:45] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Dec 05 10:28:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:28:45] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Dec 05 10:28:45 compute-0 podman[287218]: 2025-12-05 10:28:45.812820405 +0000 UTC m=+7.266514861 container remove c60b2a4a3c0b84befb526181da86a1e5fe1bd6b193f365013cb3e3aadabf4a8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:28:45 compute-0 systemd[1]: libpod-conmon-c60b2a4a3c0b84befb526181da86a1e5fe1bd6b193f365013cb3e3aadabf4a8f.scope: Deactivated successfully.
Dec 05 10:28:45 compute-0 sudo[287082]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:45 compute-0 ceph-mon[74418]: pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Dec 05 10:28:45 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:28:45 compute-0 ceph-mon[74418]: pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Dec 05 10:28:45 compute-0 ceph-mon[74418]: pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Dec 05 10:28:45 compute-0 sudo[287269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:28:45 compute-0 sudo[287269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:28:45 compute-0 sudo[287269]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:46 compute-0 sudo[287294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:28:46 compute-0 sudo[287294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:28:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:46.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:46 compute-0 podman[287357]: 2025-12-05 10:28:46.500645141 +0000 UTC m=+0.023435758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:28:46 compute-0 podman[287357]: 2025-12-05 10:28:46.860351608 +0000 UTC m=+0.383142165 container create 07179e0285d0b9ad37b7eb4046009c131479ba90ee4ac7c76a16227fbf9bcdd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_cray, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:28:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Dec 05 10:28:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:46.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:46 compute-0 systemd[1]: Started libpod-conmon-07179e0285d0b9ad37b7eb4046009c131479ba90ee4ac7c76a16227fbf9bcdd7.scope.
Dec 05 10:28:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:28:47 compute-0 podman[287357]: 2025-12-05 10:28:47.007918139 +0000 UTC m=+0.530708696 container init 07179e0285d0b9ad37b7eb4046009c131479ba90ee4ac7c76a16227fbf9bcdd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:28:47 compute-0 podman[287357]: 2025-12-05 10:28:47.018855016 +0000 UTC m=+0.541645583 container start 07179e0285d0b9ad37b7eb4046009c131479ba90ee4ac7c76a16227fbf9bcdd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_cray, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:28:47 compute-0 optimistic_cray[287374]: 167 167
Dec 05 10:28:47 compute-0 systemd[1]: libpod-07179e0285d0b9ad37b7eb4046009c131479ba90ee4ac7c76a16227fbf9bcdd7.scope: Deactivated successfully.
Dec 05 10:28:47 compute-0 podman[287357]: 2025-12-05 10:28:47.03482008 +0000 UTC m=+0.557610637 container attach 07179e0285d0b9ad37b7eb4046009c131479ba90ee4ac7c76a16227fbf9bcdd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 05 10:28:47 compute-0 podman[287357]: 2025-12-05 10:28:47.037862713 +0000 UTC m=+0.560653270 container died 07179e0285d0b9ad37b7eb4046009c131479ba90ee4ac7c76a16227fbf9bcdd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 10:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-018706b6e7b8e9fbbaca5d4e2272a2ffaf122af8f30d3662277fec94051d9ba7-merged.mount: Deactivated successfully.
Dec 05 10:28:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:47.504Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:28:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:47.505Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:47 compute-0 nova_compute[257087]: 2025-12-05 10:28:47.627 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:28:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:28:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:28:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:28:48 compute-0 podman[287357]: 2025-12-05 10:28:48.053874189 +0000 UTC m=+1.576664746 container remove 07179e0285d0b9ad37b7eb4046009c131479ba90ee4ac7c76a16227fbf9bcdd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_cray, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:28:48 compute-0 ceph-mon[74418]: pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Dec 05 10:28:48 compute-0 systemd[1]: libpod-conmon-07179e0285d0b9ad37b7eb4046009c131479ba90ee4ac7c76a16227fbf9bcdd7.scope: Deactivated successfully.
Dec 05 10:28:48 compute-0 podman[287398]: 2025-12-05 10:28:48.325166443 +0000 UTC m=+0.111267905 container create 9969d78fe744f19f8d13b0dbbb16e7ff4a91130eff798be271f3a9be15118c75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lalande, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:28:48 compute-0 podman[287398]: 2025-12-05 10:28:48.245462326 +0000 UTC m=+0.031563808 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:28:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:48.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:48 compute-0 systemd[1]: Started libpod-conmon-9969d78fe744f19f8d13b0dbbb16e7ff4a91130eff798be271f3a9be15118c75.scope.
Dec 05 10:28:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e2fab29a7c2ec70627e7d676451a7f2bd1a1419a15d53ffc9ae9e0502019b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e2fab29a7c2ec70627e7d676451a7f2bd1a1419a15d53ffc9ae9e0502019b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e2fab29a7c2ec70627e7d676451a7f2bd1a1419a15d53ffc9ae9e0502019b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e2fab29a7c2ec70627e7d676451a7f2bd1a1419a15d53ffc9ae9e0502019b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:28:48 compute-0 podman[287398]: 2025-12-05 10:28:48.719511631 +0000 UTC m=+0.505613123 container init 9969d78fe744f19f8d13b0dbbb16e7ff4a91130eff798be271f3a9be15118c75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 10:28:48 compute-0 podman[287398]: 2025-12-05 10:28:48.730200122 +0000 UTC m=+0.516301594 container start 9969d78fe744f19f8d13b0dbbb16e7ff4a91130eff798be271f3a9be15118c75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lalande, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 10:28:48 compute-0 podman[287398]: 2025-12-05 10:28:48.756009183 +0000 UTC m=+0.542110645 container attach 9969d78fe744f19f8d13b0dbbb16e7ff4a91130eff798be271f3a9be15118c75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lalande, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:28:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Dec 05 10:28:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:48.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:48.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:49 compute-0 lvm[287492]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:28:49 compute-0 lvm[287492]: VG ceph_vg0 finished
Dec 05 10:28:49 compute-0 busy_lalande[287417]: {}
Dec 05 10:28:49 compute-0 systemd[1]: libpod-9969d78fe744f19f8d13b0dbbb16e7ff4a91130eff798be271f3a9be15118c75.scope: Deactivated successfully.
Dec 05 10:28:49 compute-0 systemd[1]: libpod-9969d78fe744f19f8d13b0dbbb16e7ff4a91130eff798be271f3a9be15118c75.scope: Consumed 1.334s CPU time.
Dec 05 10:28:49 compute-0 podman[287398]: 2025-12-05 10:28:49.533438305 +0000 UTC m=+1.319539767 container died 9969d78fe744f19f8d13b0dbbb16e7ff4a91130eff798be271f3a9be15118c75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:28:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-49e2fab29a7c2ec70627e7d676451a7f2bd1a1419a15d53ffc9ae9e0502019b6-merged.mount: Deactivated successfully.
Dec 05 10:28:49 compute-0 podman[287398]: 2025-12-05 10:28:49.899362982 +0000 UTC m=+1.685464434 container remove 9969d78fe744f19f8d13b0dbbb16e7ff4a91130eff798be271f3a9be15118c75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 05 10:28:49 compute-0 systemd[1]: libpod-conmon-9969d78fe744f19f8d13b0dbbb16e7ff4a91130eff798be271f3a9be15118c75.scope: Deactivated successfully.
Dec 05 10:28:49 compute-0 sudo[287294]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:28:50 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:28:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:28:50 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:28:50 compute-0 sudo[287507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:28:50 compute-0 sudo[287507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:28:50 compute-0 sudo[287507]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:50 compute-0 ceph-mon[74418]: pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Dec 05 10:28:50 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:28:50 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:28:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:28:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:50.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:28:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Dec 05 10:28:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:50.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:52 compute-0 ceph-mon[74418]: pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Dec 05 10:28:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:52.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:52 compute-0 nova_compute[257087]: 2025-12-05 10:28:52.629 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:28:52 compute-0 nova_compute[257087]: 2025-12-05 10:28:52.632 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:28:52 compute-0 nova_compute[257087]: 2025-12-05 10:28:52.632 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:28:52 compute-0 nova_compute[257087]: 2025-12-05 10:28:52.632 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:28:52 compute-0 nova_compute[257087]: 2025-12-05 10:28:52.665 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:52 compute-0 nova_compute[257087]: 2025-12-05 10:28:52.666 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:28:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Dec 05 10:28:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:52.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:28:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:28:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:28:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:28:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:53.791Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:28:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:53.792Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:28:54 compute-0 ceph-mon[74418]: pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Dec 05 10:28:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:54.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 25 op/s
Dec 05 10:28:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:54.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:28:55] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Dec 05 10:28:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:28:55] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Dec 05 10:28:55 compute-0 ceph-mon[74418]: pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 25 op/s
Dec 05 10:28:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:56.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:28:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:56 compute-0 sudo[287540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:28:56 compute-0 sudo[287540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:28:56 compute-0 sudo[287540]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 05 10:28:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:56.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:28:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1660042166' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:28:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:28:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1660042166' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:28:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:57.505Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:28:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:57.515Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:28:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:28:57 compute-0 nova_compute[257087]: 2025-12-05 10:28:57.666 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:28:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:28:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:28:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:28:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:28:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:28:57 compute-0 ceph-mon[74418]: pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 05 10:28:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1660042166' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:28:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1660042166' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:28:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:28:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:28:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:28:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:28:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:28:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:28:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:28:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:28:58.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:28:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 23 op/s
Dec 05 10:28:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:28:58.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:28:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:28:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:28:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:28:58.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:00.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:00 compute-0 ceph-mon[74418]: pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 23 op/s
Dec 05 10:29:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Dec 05 10:29:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:00.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:02 compute-0 ceph-mon[74418]: pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Dec 05 10:29:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:02.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:02 compute-0 nova_compute[257087]: 2025-12-05 10:29:02.668 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:29:02 compute-0 nova_compute[257087]: 2025-12-05 10:29:02.669 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:02 compute-0 nova_compute[257087]: 2025-12-05 10:29:02.669 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:29:02 compute-0 nova_compute[257087]: 2025-12-05 10:29:02.669 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:29:02 compute-0 nova_compute[257087]: 2025-12-05 10:29:02.670 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:29:02 compute-0 nova_compute[257087]: 2025-12-05 10:29:02.671 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Dec 05 10:29:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:02.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:29:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:29:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:29:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:29:03 compute-0 ceph-mon[74418]: pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Dec 05 10:29:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:03.793Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:04.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 4 op/s
Dec 05 10:29:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:04.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:29:05] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:29:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:29:05] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:29:05 compute-0 ceph-mon[74418]: pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 4 op/s
Dec 05 10:29:06 compute-0 podman[287573]: 2025-12-05 10:29:06.418173767 +0000 UTC m=+0.062836139 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec 05 10:29:06 compute-0 podman[287575]: 2025-12-05 10:29:06.42193524 +0000 UTC m=+0.065325487 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:29:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:06.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:06 compute-0 podman[287574]: 2025-12-05 10:29:06.475960098 +0000 UTC m=+0.121035691 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:29:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:06.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:07.517Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:07 compute-0 nova_compute[257087]: 2025-12-05 10:29:07.671 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:29:07 compute-0 nova_compute[257087]: 2025-12-05 10:29:07.673 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:07 compute-0 nova_compute[257087]: 2025-12-05 10:29:07.673 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:29:07 compute-0 nova_compute[257087]: 2025-12-05 10:29:07.673 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:29:07 compute-0 nova_compute[257087]: 2025-12-05 10:29:07.673 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:29:07 compute-0 nova_compute[257087]: 2025-12-05 10:29:07.675 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:29:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:29:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:29:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:29:08 compute-0 ceph-mon[74418]: pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:08.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:08.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:08.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:09 compute-0 ceph-mon[74418]: pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:09 compute-0 nova_compute[257087]: 2025-12-05 10:29:09.802 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:29:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:10.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:10 compute-0 nova_compute[257087]: 2025-12-05 10:29:10.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:29:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:29:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:10.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:29:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:11 compute-0 nova_compute[257087]: 2025-12-05 10:29:11.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:29:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:12.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:12 compute-0 ceph-mon[74418]: pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:12 compute-0 nova_compute[257087]: 2025-12-05 10:29:12.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:29:12 compute-0 nova_compute[257087]: 2025-12-05 10:29:12.583 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:29:12 compute-0 nova_compute[257087]: 2025-12-05 10:29:12.584 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:29:12 compute-0 nova_compute[257087]: 2025-12-05 10:29:12.584 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:29:12 compute-0 nova_compute[257087]: 2025-12-05 10:29:12.584 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:29:12 compute-0 nova_compute[257087]: 2025-12-05 10:29:12.585 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:29:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:29:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:29:12 compute-0 nova_compute[257087]: 2025-12-05 10:29:12.675 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:12.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:29:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:29:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:29:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:29:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:29:13 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147021908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:29:13 compute-0 nova_compute[257087]: 2025-12-05 10:29:13.072 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:29:13 compute-0 nova_compute[257087]: 2025-12-05 10:29:13.310 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:29:13 compute-0 nova_compute[257087]: 2025-12-05 10:29:13.312 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4385MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:29:13 compute-0 nova_compute[257087]: 2025-12-05 10:29:13.312 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:29:13 compute-0 nova_compute[257087]: 2025-12-05 10:29:13.312 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:29:13 compute-0 nova_compute[257087]: 2025-12-05 10:29:13.382 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:29:13 compute-0 nova_compute[257087]: 2025-12-05 10:29:13.382 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:29:13 compute-0 nova_compute[257087]: 2025-12-05 10:29:13.403 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:29:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:29:13 compute-0 ceph-mon[74418]: pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:13 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3147021908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:29:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:13.795Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:14 compute-0 nova_compute[257087]: 2025-12-05 10:29:14.005 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:29:14 compute-0 nova_compute[257087]: 2025-12-05 10:29:14.013 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:29:14 compute-0 sudo[278705]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:14 compute-0 sshd-session[278690]: Received disconnect from 192.168.122.10 port 58584:11: disconnected by user
Dec 05 10:29:14 compute-0 sshd-session[278690]: Disconnected from user zuul 192.168.122.10 port 58584
Dec 05 10:29:14 compute-0 sshd-session[278664]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:29:14 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Dec 05 10:29:14 compute-0 systemd[1]: session-56.scope: Consumed 3min 23.207s CPU time, 826.6M memory peak, read 328.0M from disk, written 64.5M to disk.
Dec 05 10:29:14 compute-0 systemd-logind[789]: Session 56 logged out. Waiting for processes to exit.
Dec 05 10:29:14 compute-0 systemd-logind[789]: Removed session 56.
Dec 05 10:29:14 compute-0 sshd-session[287685]: Accepted publickey for zuul from 192.168.122.10 port 42636 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 10:29:14 compute-0 systemd-logind[789]: New session 57 of user zuul.
Dec 05 10:29:14 compute-0 systemd[1]: Started Session 57 of User zuul.
Dec 05 10:29:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:14.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:14 compute-0 sshd-session[287685]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 10:29:14 compute-0 sudo[287690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-12-05-tdztejr.tar.xz
Dec 05 10:29:14 compute-0 sudo[287690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:29:14 compute-0 sudo[287690]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:14 compute-0 sshd-session[287689]: Received disconnect from 192.168.122.10 port 42636:11: disconnected by user
Dec 05 10:29:14 compute-0 sshd-session[287689]: Disconnected from user zuul 192.168.122.10 port 42636
Dec 05 10:29:14 compute-0 sshd-session[287685]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:29:14 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Dec 05 10:29:14 compute-0 systemd-logind[789]: Session 57 logged out. Waiting for processes to exit.
Dec 05 10:29:14 compute-0 systemd-logind[789]: Removed session 57.
Dec 05 10:29:14 compute-0 sshd-session[287716]: Accepted publickey for zuul from 192.168.122.10 port 54126 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 10:29:14 compute-0 systemd-logind[789]: New session 58 of user zuul.
Dec 05 10:29:14 compute-0 systemd[1]: Started Session 58 of User zuul.
Dec 05 10:29:14 compute-0 sshd-session[287716]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 10:29:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:14 compute-0 sudo[287720]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Dec 05 10:29:14 compute-0 sudo[287720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:29:14 compute-0 sudo[287720]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:14 compute-0 sshd-session[287719]: Received disconnect from 192.168.122.10 port 54126:11: disconnected by user
Dec 05 10:29:14 compute-0 sshd-session[287719]: Disconnected from user zuul 192.168.122.10 port 54126
Dec 05 10:29:14 compute-0 sshd-session[287716]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:29:14 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Dec 05 10:29:14 compute-0 systemd-logind[789]: Session 58 logged out. Waiting for processes to exit.
Dec 05 10:29:14 compute-0 systemd-logind[789]: Removed session 58.
Dec 05 10:29:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:14.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:29:15] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:29:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:29:15] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:29:16 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1151084718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:29:16 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3710231162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:29:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:16.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:16 compute-0 sudo[287747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:29:16 compute-0 sudo[287747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:29:16 compute-0 sudo[287747]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:16 compute-0 nova_compute[257087]: 2025-12-05 10:29:16.754 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:29:16 compute-0 nova_compute[257087]: 2025-12-05 10:29:16.758 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:29:16 compute-0 nova_compute[257087]: 2025-12-05 10:29:16.759 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:29:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:16.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:17 compute-0 ceph-mon[74418]: pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:17.518Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:17 compute-0 nova_compute[257087]: 2025-12-05 10:29:17.678 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:17 compute-0 nova_compute[257087]: 2025-12-05 10:29:17.761 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:29:17 compute-0 nova_compute[257087]: 2025-12-05 10:29:17.762 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:29:17 compute-0 nova_compute[257087]: 2025-12-05 10:29:17.762 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:29:17 compute-0 nova_compute[257087]: 2025-12-05 10:29:17.762 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:29:17 compute-0 nova_compute[257087]: 2025-12-05 10:29:17.763 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:17.918471) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930557918616, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 700, "num_deletes": 251, "total_data_size": 1059442, "memory_usage": 1085176, "flush_reason": "Manual Compaction"}
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930557930096, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1040891, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34352, "largest_seqno": 35051, "table_properties": {"data_size": 1037059, "index_size": 1612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8861, "raw_average_key_size": 19, "raw_value_size": 1029399, "raw_average_value_size": 2318, "num_data_blocks": 69, "num_entries": 444, "num_filter_entries": 444, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930492, "oldest_key_time": 1764930492, "file_creation_time": 1764930557, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 11704 microseconds, and 5437 cpu microseconds.
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:17.930181) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1040891 bytes OK
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:17.930221) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:17.952763) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:17.952841) EVENT_LOG_v1 {"time_micros": 1764930557952810, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:17.952874) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1055767, prev total WAL file size 1055767, number of live WAL files 2.
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:17.953740) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(1016KB)], [74(12MB)]
Dec 05 10:29:17 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930557953915, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 13842990, "oldest_snapshot_seqno": -1}
Dec 05 10:29:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:29:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:29:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:29:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6467 keys, 11643095 bytes, temperature: kUnknown
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930558079831, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11643095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11603171, "index_size": 22662, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 171835, "raw_average_key_size": 26, "raw_value_size": 11489665, "raw_average_value_size": 1776, "num_data_blocks": 883, "num_entries": 6467, "num_filter_entries": 6467, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764930557, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:18.080297) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11643095 bytes
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:18.081689) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.8 rd, 92.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.2 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(24.5) write-amplify(11.2) OK, records in: 6983, records dropped: 516 output_compression: NoCompression
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:18.081707) EVENT_LOG_v1 {"time_micros": 1764930558081699, "job": 42, "event": "compaction_finished", "compaction_time_micros": 126034, "compaction_time_cpu_micros": 45875, "output_level": 6, "num_output_files": 1, "total_output_size": 11643095, "num_input_records": 6983, "num_output_records": 6467, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930558081998, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930558084116, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:17.953522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:18.084270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:18.084283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:18.084285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:18.084287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:29:18 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:29:18.084289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:29:18 compute-0 ceph-mon[74418]: pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/931949681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:29:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/50834571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:29:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:29:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:18.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:29:18 compute-0 nova_compute[257087]: 2025-12-05 10:29:18.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:29:18 compute-0 nova_compute[257087]: 2025-12-05 10:29:18.531 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:29:18 compute-0 nova_compute[257087]: 2025-12-05 10:29:18.531 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:29:18 compute-0 nova_compute[257087]: 2025-12-05 10:29:18.577 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:29:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:18.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:18.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:19 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1396246351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:29:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:20.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:20 compute-0 ceph-mon[74418]: pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:29:20.587 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:29:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:29:20.588 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:29:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:29:20.589 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:29:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:20.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:21 compute-0 ceph-mon[74418]: pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:29:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:22.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:29:22 compute-0 nova_compute[257087]: 2025-12-05 10:29:22.679 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:22 compute-0 nova_compute[257087]: 2025-12-05 10:29:22.681 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:22.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:29:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:29:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:29:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:29:23 compute-0 ceph-mon[74418]: pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:23.796Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:24.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:24.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:29:25] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:29:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:29:25] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:29:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:26.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:26 compute-0 ceph-mon[74418]: pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:26.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:27.519Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:29:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:29:27 compute-0 nova_compute[257087]: 2025-12-05 10:29:27.681 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:27 compute-0 nova_compute[257087]: 2025-12-05 10:29:27.683 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:29:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:29:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:29:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:29:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:29:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:29:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:29:27
Dec 05 10:29:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:29:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:29:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.nfs', 'backups', '.rgw.root', 'volumes', 'images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec 05 10:29:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:29:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:29:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:29:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:29:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:29:28 compute-0 ceph-mon[74418]: pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:29:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:28.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:28.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:28.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:30 compute-0 ceph-mon[74418]: pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:30.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:30.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:32 compute-0 ceph-mon[74418]: pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:32.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:32 compute-0 nova_compute[257087]: 2025-12-05 10:29:32.683 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:32.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:29:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:29:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:29:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:29:33 compute-0 ceph-mon[74418]: pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:33.797Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:34.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:34.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:29:35] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:29:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:29:35] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:29:36 compute-0 ceph-mon[74418]: pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:36.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:36 compute-0 sudo[287793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:29:36 compute-0 sudo[287793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:29:36 compute-0 sudo[287793]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:36 compute-0 podman[287817]: 2025-12-05 10:29:36.921131907 +0000 UTC m=+0.067829175 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 10:29:36 compute-0 podman[287819]: 2025-12-05 10:29:36.930778078 +0000 UTC m=+0.074823875 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec 05 10:29:36 compute-0 podman[287818]: 2025-12-05 10:29:36.957535556 +0000 UTC m=+0.104257016 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Dec 05 10:29:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:36.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:37.520Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:37 compute-0 nova_compute[257087]: 2025-12-05 10:29:37.685 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:29:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:29:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:29:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:29:38 compute-0 ceph-mon[74418]: pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:29:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:38.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:29:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:38.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:38.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:39 compute-0 ceph-mon[74418]: pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:40.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:40.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:42 compute-0 ceph-mon[74418]: pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:42.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:29:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:29:42 compute-0 nova_compute[257087]: 2025-12-05 10:29:42.688 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:42.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:29:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:29:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:29:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:29:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:29:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:43.799Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:44 compute-0 ceph-mon[74418]: pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:44.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:29:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:29:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:29:45] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:29:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:29:45] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:29:45 compute-0 ceph-mon[74418]: pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:46.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:47.521Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:47 compute-0 nova_compute[257087]: 2025-12-05 10:29:47.690 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:29:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:29:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:29:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:29:48 compute-0 ceph-mon[74418]: pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:48.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:48.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:48.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:49 compute-0 ceph-mon[74418]: pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:29:50 compute-0 sudo[287893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:29:50 compute-0 sudo[287893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:29:50 compute-0 sudo[287893]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:50 compute-0 sudo[287918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:29:50 compute-0 sudo[287918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:29:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:50.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:50 compute-0 sudo[287918]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:29:51 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:29:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:29:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:29:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:29:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:29:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:29:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:29:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:29:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:29:51 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:29:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:29:51 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:29:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:29:51 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:29:51 compute-0 sudo[287977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:29:51 compute-0 sudo[287977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:29:51 compute-0 sudo[287977]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:51 compute-0 sudo[288002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:29:51 compute-0 sudo[288002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:29:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:51 compute-0 podman[288068]: 2025-12-05 10:29:51.789299097 +0000 UTC m=+0.024512298 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:29:51 compute-0 podman[288068]: 2025-12-05 10:29:51.896598344 +0000 UTC m=+0.131811515 container create cdd0c361fe8a59e8d50f74a79f4f26e4391a00092ac8b7b0f1064665a06e6f41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gauss, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:29:52 compute-0 systemd[1]: Started libpod-conmon-cdd0c361fe8a59e8d50f74a79f4f26e4391a00092ac8b7b0f1064665a06e6f41.scope.
Dec 05 10:29:52 compute-0 ceph-mon[74418]: pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:29:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:29:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:29:52 compute-0 ceph-mon[74418]: pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:29:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:29:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:29:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:29:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:29:52 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:29:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:29:52 compute-0 podman[288068]: 2025-12-05 10:29:52.145662583 +0000 UTC m=+0.380875774 container init cdd0c361fe8a59e8d50f74a79f4f26e4391a00092ac8b7b0f1064665a06e6f41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Dec 05 10:29:52 compute-0 podman[288068]: 2025-12-05 10:29:52.157626039 +0000 UTC m=+0.392839210 container start cdd0c361fe8a59e8d50f74a79f4f26e4391a00092ac8b7b0f1064665a06e6f41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gauss, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 10:29:52 compute-0 podman[288068]: 2025-12-05 10:29:52.16466349 +0000 UTC m=+0.399876761 container attach cdd0c361fe8a59e8d50f74a79f4f26e4391a00092ac8b7b0f1064665a06e6f41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gauss, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:29:52 compute-0 blissful_gauss[288084]: 167 167
Dec 05 10:29:52 compute-0 systemd[1]: libpod-cdd0c361fe8a59e8d50f74a79f4f26e4391a00092ac8b7b0f1064665a06e6f41.scope: Deactivated successfully.
Dec 05 10:29:52 compute-0 conmon[288084]: conmon cdd0c361fe8a59e8d50f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cdd0c361fe8a59e8d50f74a79f4f26e4391a00092ac8b7b0f1064665a06e6f41.scope/container/memory.events
Dec 05 10:29:52 compute-0 podman[288068]: 2025-12-05 10:29:52.169399789 +0000 UTC m=+0.404612960 container died cdd0c361fe8a59e8d50f74a79f4f26e4391a00092ac8b7b0f1064665a06e6f41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:29:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-536c0058707e03d706ec2069ed93515179c6eb7768fe70aaf8ad1ef21fb052d4-merged.mount: Deactivated successfully.
Dec 05 10:29:52 compute-0 podman[288068]: 2025-12-05 10:29:52.214128705 +0000 UTC m=+0.449341876 container remove cdd0c361fe8a59e8d50f74a79f4f26e4391a00092ac8b7b0f1064665a06e6f41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gauss, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:29:52 compute-0 systemd[1]: libpod-conmon-cdd0c361fe8a59e8d50f74a79f4f26e4391a00092ac8b7b0f1064665a06e6f41.scope: Deactivated successfully.
Dec 05 10:29:52 compute-0 podman[288108]: 2025-12-05 10:29:52.372671553 +0000 UTC m=+0.028663360 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:29:52 compute-0 podman[288108]: 2025-12-05 10:29:52.498434001 +0000 UTC m=+0.154425798 container create a960951063f6e76607c0c5f40ffcd24699dfc37763c5db7201c9443dfa31dc4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_heyrovsky, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 10:29:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:52.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:52 compute-0 systemd[1]: Started libpod-conmon-a960951063f6e76607c0c5f40ffcd24699dfc37763c5db7201c9443dfa31dc4c.scope.
Dec 05 10:29:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fd38365c7b663f636aba6a55be3805151921697273577c1075a4481414010c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fd38365c7b663f636aba6a55be3805151921697273577c1075a4481414010c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fd38365c7b663f636aba6a55be3805151921697273577c1075a4481414010c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fd38365c7b663f636aba6a55be3805151921697273577c1075a4481414010c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fd38365c7b663f636aba6a55be3805151921697273577c1075a4481414010c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:52 compute-0 podman[288108]: 2025-12-05 10:29:52.607145846 +0000 UTC m=+0.263137643 container init a960951063f6e76607c0c5f40ffcd24699dfc37763c5db7201c9443dfa31dc4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:29:52 compute-0 podman[288108]: 2025-12-05 10:29:52.619383179 +0000 UTC m=+0.275374956 container start a960951063f6e76607c0c5f40ffcd24699dfc37763c5db7201c9443dfa31dc4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 05 10:29:52 compute-0 podman[288108]: 2025-12-05 10:29:52.623056619 +0000 UTC m=+0.279048396 container attach a960951063f6e76607c0c5f40ffcd24699dfc37763c5db7201c9443dfa31dc4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_heyrovsky, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 10:29:52 compute-0 nova_compute[257087]: 2025-12-05 10:29:52.692 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:52 compute-0 admiring_heyrovsky[288127]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:29:52 compute-0 admiring_heyrovsky[288127]: --> All data devices are unavailable
Dec 05 10:29:53 compute-0 systemd[1]: libpod-a960951063f6e76607c0c5f40ffcd24699dfc37763c5db7201c9443dfa31dc4c.scope: Deactivated successfully.
Dec 05 10:29:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:29:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:29:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:29:53 compute-0 podman[288108]: 2025-12-05 10:29:53.000832708 +0000 UTC m=+0.656824495 container died a960951063f6e76607c0c5f40ffcd24699dfc37763c5db7201c9443dfa31dc4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_heyrovsky, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:29:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:29:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:53.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:29:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-93fd38365c7b663f636aba6a55be3805151921697273577c1075a4481414010c-merged.mount: Deactivated successfully.
Dec 05 10:29:53 compute-0 podman[288108]: 2025-12-05 10:29:53.261649617 +0000 UTC m=+0.917641414 container remove a960951063f6e76607c0c5f40ffcd24699dfc37763c5db7201c9443dfa31dc4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 10:29:53 compute-0 sudo[288002]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:53 compute-0 systemd[1]: libpod-conmon-a960951063f6e76607c0c5f40ffcd24699dfc37763c5db7201c9443dfa31dc4c.scope: Deactivated successfully.
Dec 05 10:29:53 compute-0 sudo[288154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:29:53 compute-0 sudo[288154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:29:53 compute-0 sudo[288154]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:53 compute-0 sudo[288179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:29:53 compute-0 sudo[288179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:29:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:53.800Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:29:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:53.800Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:29:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:53.801Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:29:53 compute-0 podman[288243]: 2025-12-05 10:29:53.899531704 +0000 UTC m=+0.045545439 container create 6eca72ee37872fe78d578bad2f70177addcf86034f28ecc5a65a5733e633db8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 05 10:29:53 compute-0 systemd[1]: Started libpod-conmon-6eca72ee37872fe78d578bad2f70177addcf86034f28ecc5a65a5733e633db8b.scope.
Dec 05 10:29:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:29:53 compute-0 podman[288243]: 2025-12-05 10:29:53.88133382 +0000 UTC m=+0.027347575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:29:54 compute-0 podman[288243]: 2025-12-05 10:29:54.224854108 +0000 UTC m=+0.370867863 container init 6eca72ee37872fe78d578bad2f70177addcf86034f28ecc5a65a5733e633db8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:29:54 compute-0 podman[288243]: 2025-12-05 10:29:54.234005536 +0000 UTC m=+0.380019271 container start 6eca72ee37872fe78d578bad2f70177addcf86034f28ecc5a65a5733e633db8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_colden, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:29:54 compute-0 ceph-mon[74418]: pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:29:54 compute-0 sweet_colden[288260]: 167 167
Dec 05 10:29:54 compute-0 systemd[1]: libpod-6eca72ee37872fe78d578bad2f70177addcf86034f28ecc5a65a5733e633db8b.scope: Deactivated successfully.
Dec 05 10:29:54 compute-0 conmon[288260]: conmon 6eca72ee37872fe78d57 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6eca72ee37872fe78d578bad2f70177addcf86034f28ecc5a65a5733e633db8b.scope/container/memory.events
Dec 05 10:29:54 compute-0 podman[288243]: 2025-12-05 10:29:54.24373235 +0000 UTC m=+0.389746115 container attach 6eca72ee37872fe78d578bad2f70177addcf86034f28ecc5a65a5733e633db8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:29:54 compute-0 podman[288243]: 2025-12-05 10:29:54.244883222 +0000 UTC m=+0.390896987 container died 6eca72ee37872fe78d578bad2f70177addcf86034f28ecc5a65a5733e633db8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:29:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-321798cb329206e3f8930cd66aa7a4008419643213f86bb5622f5e801ca8350f-merged.mount: Deactivated successfully.
Dec 05 10:29:54 compute-0 podman[288243]: 2025-12-05 10:29:54.284099968 +0000 UTC m=+0.430113703 container remove 6eca72ee37872fe78d578bad2f70177addcf86034f28ecc5a65a5733e633db8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_colden, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:29:54 compute-0 systemd[1]: libpod-conmon-6eca72ee37872fe78d578bad2f70177addcf86034f28ecc5a65a5733e633db8b.scope: Deactivated successfully.
Dec 05 10:29:54 compute-0 podman[288286]: 2025-12-05 10:29:54.476439915 +0000 UTC m=+0.068354448 container create 8429f26b0af3977b5efca2863e361a09e22392a8c485e5ef665bf476b2c1745a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_northcutt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:29:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:54.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:54 compute-0 systemd[1]: Started libpod-conmon-8429f26b0af3977b5efca2863e361a09e22392a8c485e5ef665bf476b2c1745a.scope.
Dec 05 10:29:54 compute-0 podman[288286]: 2025-12-05 10:29:54.433094667 +0000 UTC m=+0.025009220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:29:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146bb3580e950aa986e5f0756a1d6766744683e9a27253e6482e846c435f8b05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146bb3580e950aa986e5f0756a1d6766744683e9a27253e6482e846c435f8b05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146bb3580e950aa986e5f0756a1d6766744683e9a27253e6482e846c435f8b05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146bb3580e950aa986e5f0756a1d6766744683e9a27253e6482e846c435f8b05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:54 compute-0 podman[288286]: 2025-12-05 10:29:54.581593924 +0000 UTC m=+0.173508467 container init 8429f26b0af3977b5efca2863e361a09e22392a8c485e5ef665bf476b2c1745a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_northcutt, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:29:54 compute-0 podman[288286]: 2025-12-05 10:29:54.588544742 +0000 UTC m=+0.180459275 container start 8429f26b0af3977b5efca2863e361a09e22392a8c485e5ef665bf476b2c1745a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_northcutt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:29:54 compute-0 podman[288286]: 2025-12-05 10:29:54.591342328 +0000 UTC m=+0.183256861 container attach 8429f26b0af3977b5efca2863e361a09e22392a8c485e5ef665bf476b2c1745a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_northcutt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]: {
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:     "1": [
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:         {
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             "devices": [
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "/dev/loop3"
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             ],
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             "lv_name": "ceph_lv0",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             "lv_size": "21470642176",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             "name": "ceph_lv0",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             "tags": {
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.cluster_name": "ceph",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.crush_device_class": "",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.encrypted": "0",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.osd_id": "1",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.type": "block",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.vdo": "0",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:                 "ceph.with_tpm": "0"
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             },
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             "type": "block",
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:             "vg_name": "ceph_vg0"
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:         }
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]:     ]
Dec 05 10:29:54 compute-0 eloquent_northcutt[288305]: }
Dec 05 10:29:54 compute-0 systemd[1]: libpod-8429f26b0af3977b5efca2863e361a09e22392a8c485e5ef665bf476b2c1745a.scope: Deactivated successfully.
Dec 05 10:29:54 compute-0 podman[288286]: 2025-12-05 10:29:54.93795108 +0000 UTC m=+0.529865613 container died 8429f26b0af3977b5efca2863e361a09e22392a8c485e5ef665bf476b2c1745a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_northcutt, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:29:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-146bb3580e950aa986e5f0756a1d6766744683e9a27253e6482e846c435f8b05-merged.mount: Deactivated successfully.
Dec 05 10:29:55 compute-0 podman[288286]: 2025-12-05 10:29:55.040143018 +0000 UTC m=+0.632057551 container remove 8429f26b0af3977b5efca2863e361a09e22392a8c485e5ef665bf476b2c1745a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 10:29:55 compute-0 systemd[1]: libpod-conmon-8429f26b0af3977b5efca2863e361a09e22392a8c485e5ef665bf476b2c1745a.scope: Deactivated successfully.
Dec 05 10:29:55 compute-0 sudo[288179]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:29:55 compute-0 sudo[288326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:29:55 compute-0 sudo[288326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:29:55 compute-0 sudo[288326]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:55 compute-0 sudo[288351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:29:55 compute-0 sudo[288351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:29:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:29:55] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:29:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:29:55] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:29:55 compute-0 podman[288417]: 2025-12-05 10:29:55.654364803 +0000 UTC m=+0.044594374 container create ae8b7517d55b4bb1d658a4413c6708d7667bc41e8e2108d9e61eb350a358c017 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bouman, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 10:29:55 compute-0 systemd[1]: Started libpod-conmon-ae8b7517d55b4bb1d658a4413c6708d7667bc41e8e2108d9e61eb350a358c017.scope.
Dec 05 10:29:55 compute-0 podman[288417]: 2025-12-05 10:29:55.636696043 +0000 UTC m=+0.026925634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:29:55 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:29:55 compute-0 podman[288417]: 2025-12-05 10:29:55.757120356 +0000 UTC m=+0.147349947 container init ae8b7517d55b4bb1d658a4413c6708d7667bc41e8e2108d9e61eb350a358c017 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bouman, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 10:29:55 compute-0 podman[288417]: 2025-12-05 10:29:55.766664915 +0000 UTC m=+0.156894486 container start ae8b7517d55b4bb1d658a4413c6708d7667bc41e8e2108d9e61eb350a358c017 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:29:55 compute-0 podman[288417]: 2025-12-05 10:29:55.770684084 +0000 UTC m=+0.160913685 container attach ae8b7517d55b4bb1d658a4413c6708d7667bc41e8e2108d9e61eb350a358c017 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bouman, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 10:29:55 compute-0 happy_bouman[288433]: 167 167
Dec 05 10:29:55 compute-0 systemd[1]: libpod-ae8b7517d55b4bb1d658a4413c6708d7667bc41e8e2108d9e61eb350a358c017.scope: Deactivated successfully.
Dec 05 10:29:55 compute-0 conmon[288433]: conmon ae8b7517d55b4bb1d658 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae8b7517d55b4bb1d658a4413c6708d7667bc41e8e2108d9e61eb350a358c017.scope/container/memory.events
Dec 05 10:29:55 compute-0 podman[288417]: 2025-12-05 10:29:55.775537056 +0000 UTC m=+0.165766637 container died ae8b7517d55b4bb1d658a4413c6708d7667bc41e8e2108d9e61eb350a358c017 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-79b09639951ef345b1c1f825ea78b419842513a601bc0d997f08a6dd289c7bfd-merged.mount: Deactivated successfully.
Dec 05 10:29:55 compute-0 podman[288417]: 2025-12-05 10:29:55.817875987 +0000 UTC m=+0.208105558 container remove ae8b7517d55b4bb1d658a4413c6708d7667bc41e8e2108d9e61eb350a358c017 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bouman, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 05 10:29:55 compute-0 systemd[1]: libpod-conmon-ae8b7517d55b4bb1d658a4413c6708d7667bc41e8e2108d9e61eb350a358c017.scope: Deactivated successfully.
Dec 05 10:29:55 compute-0 podman[288457]: 2025-12-05 10:29:55.99126834 +0000 UTC m=+0.045517418 container create bec668cafc5a6d8a5ffec4b2dc61aa8c92bbcb962a3c5fa0433b66ec02d1a8d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kirch, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:29:56 compute-0 systemd[1]: Started libpod-conmon-bec668cafc5a6d8a5ffec4b2dc61aa8c92bbcb962a3c5fa0433b66ec02d1a8d1.scope.
Dec 05 10:29:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:29:56 compute-0 podman[288457]: 2025-12-05 10:29:55.972078588 +0000 UTC m=+0.026327686 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eca597f20ad2097abea88eff0dbd75daa4e0b3ce8f5d8d8d39e70fe29f015c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eca597f20ad2097abea88eff0dbd75daa4e0b3ce8f5d8d8d39e70fe29f015c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eca597f20ad2097abea88eff0dbd75daa4e0b3ce8f5d8d8d39e70fe29f015c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eca597f20ad2097abea88eff0dbd75daa4e0b3ce8f5d8d8d39e70fe29f015c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:29:56 compute-0 podman[288457]: 2025-12-05 10:29:56.086892909 +0000 UTC m=+0.141142007 container init bec668cafc5a6d8a5ffec4b2dc61aa8c92bbcb962a3c5fa0433b66ec02d1a8d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kirch, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:29:56 compute-0 podman[288457]: 2025-12-05 10:29:56.094437014 +0000 UTC m=+0.148686092 container start bec668cafc5a6d8a5ffec4b2dc61aa8c92bbcb962a3c5fa0433b66ec02d1a8d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kirch, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:29:56 compute-0 podman[288457]: 2025-12-05 10:29:56.098307439 +0000 UTC m=+0.152556547 container attach bec668cafc5a6d8a5ffec4b2dc61aa8c92bbcb962a3c5fa0433b66ec02d1a8d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kirch, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:29:56 compute-0 ceph-mon[74418]: pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:29:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:56.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:56 compute-0 lvm[288549]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:29:56 compute-0 lvm[288549]: VG ceph_vg0 finished
Dec 05 10:29:56 compute-0 festive_kirch[288473]: {}
Dec 05 10:29:56 compute-0 systemd[1]: libpod-bec668cafc5a6d8a5ffec4b2dc61aa8c92bbcb962a3c5fa0433b66ec02d1a8d1.scope: Deactivated successfully.
Dec 05 10:29:56 compute-0 systemd[1]: libpod-bec668cafc5a6d8a5ffec4b2dc61aa8c92bbcb962a3c5fa0433b66ec02d1a8d1.scope: Consumed 1.196s CPU time.
Dec 05 10:29:56 compute-0 podman[288457]: 2025-12-05 10:29:56.85250531 +0000 UTC m=+0.906754388 container died bec668cafc5a6d8a5ffec4b2dc61aa8c92bbcb962a3c5fa0433b66ec02d1a8d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kirch, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 10:29:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-52eca597f20ad2097abea88eff0dbd75daa4e0b3ce8f5d8d8d39e70fe29f015c-merged.mount: Deactivated successfully.
Dec 05 10:29:56 compute-0 podman[288457]: 2025-12-05 10:29:56.903273449 +0000 UTC m=+0.957522557 container remove bec668cafc5a6d8a5ffec4b2dc61aa8c92bbcb962a3c5fa0433b66ec02d1a8d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kirch, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 05 10:29:56 compute-0 systemd[1]: libpod-conmon-bec668cafc5a6d8a5ffec4b2dc61aa8c92bbcb962a3c5fa0433b66ec02d1a8d1.scope: Deactivated successfully.
Dec 05 10:29:56 compute-0 sudo[288560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:29:56 compute-0 sudo[288560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:29:56 compute-0 sudo[288560]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:56 compute-0 sudo[288351]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:29:56 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:29:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:29:56 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:29:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:29:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:57.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:29:57 compute-0 sudo[288588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:29:57 compute-0 sudo[288588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:29:57 compute-0 sudo[288588]: pam_unix(sudo:session): session closed for user root
Dec 05 10:29:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:29:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:57.522Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:29:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:57.523Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:29:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:29:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:29:57 compute-0 nova_compute[257087]: 2025-12-05 10:29:57.695 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:29:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:29:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:29:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:29:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:29:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:29:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:29:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:29:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:29:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:29:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:29:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:29:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:29:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:29:58 compute-0 ceph-mon[74418]: pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:29:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3747405873' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:29:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3747405873' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:29:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:29:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:29:58.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:29:58.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:29:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:29:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:29:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:29:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:29:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:30:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Dec 05 10:30:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 10:30:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :      osd.0 observed slow operation indications in BlueStore
Dec 05 10:30:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Dec 05 10:30:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec 05 10:30:00 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.qiwwqr on compute-1 is in error state
Dec 05 10:30:00 compute-0 ceph-mon[74418]: pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:30:00 compute-0 ceph-mon[74418]: Health detail: HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Dec 05 10:30:00 compute-0 ceph-mon[74418]: [WRN] BLUESTORE_SLOW_OP_ALERT: 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 10:30:00 compute-0 ceph-mon[74418]:      osd.0 observed slow operation indications in BlueStore
Dec 05 10:30:00 compute-0 ceph-mon[74418]:      osd.1 observed slow operation indications in BlueStore
Dec 05 10:30:00 compute-0 ceph-mon[74418]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec 05 10:30:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:00 compute-0 ceph-mon[74418]:     daemon nfs.cephfs.0.0.compute-1.qiwwqr on compute-1 is in error state
Dec 05 10:30:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:00.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:30:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:30:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:30:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:30:01 compute-0 ceph-mon[74418]: pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:30:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:02.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:02 compute-0 nova_compute[257087]: 2025-12-05 10:30:02.697 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:30:02 compute-0 nova_compute[257087]: 2025-12-05 10:30:02.699 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:30:02 compute-0 nova_compute[257087]: 2025-12-05 10:30:02.700 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:30:02 compute-0 nova_compute[257087]: 2025-12-05 10:30:02.700 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:30:02 compute-0 nova_compute[257087]: 2025-12-05 10:30:02.855 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:30:02 compute-0 nova_compute[257087]: 2025-12-05 10:30:02.856 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:30:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:30:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:30:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:30:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:30:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:03.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:03.802Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:30:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:03.803Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:04.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:04 compute-0 ceph-mon[74418]: pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:05.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:30:05] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:30:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:30:05] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:30:06 compute-0 ceph-mon[74418]: pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:06.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:30:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:07.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:07 compute-0 podman[288623]: 2025-12-05 10:30:07.408503222 +0000 UTC m=+0.065355608 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:30:07 compute-0 podman[288625]: 2025-12-05 10:30:07.437781737 +0000 UTC m=+0.092431693 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:30:07 compute-0 podman[288624]: 2025-12-05 10:30:07.443636776 +0000 UTC m=+0.099736261 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:30:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:07.524Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:07 compute-0 nova_compute[257087]: 2025-12-05 10:30:07.855 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:30:07 compute-0 nova_compute[257087]: 2025-12-05 10:30:07.857 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:30:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:30:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:30:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:30:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:30:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:08.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:08 compute-0 ceph-mon[74418]: pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:08.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:09.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:09 compute-0 ceph-mon[74418]: pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:30:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:10.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:30:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:11.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:11 compute-0 nova_compute[257087]: 2025-12-05 10:30:11.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:30:11 compute-0 nova_compute[257087]: 2025-12-05 10:30:11.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:30:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:30:12 compute-0 ceph-mon[74418]: pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:12 compute-0 nova_compute[257087]: 2025-12-05 10:30:12.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:30:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:12.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:30:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:30:12 compute-0 nova_compute[257087]: 2025-12-05 10:30:12.858 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:30:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:30:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:30:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:30:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:30:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:13.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:30:13 compute-0 nova_compute[257087]: 2025-12-05 10:30:13.523 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:30:13 compute-0 nova_compute[257087]: 2025-12-05 10:30:13.544 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:30:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:13.804Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:14 compute-0 ceph-mon[74418]: pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:14.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:14 compute-0 nova_compute[257087]: 2025-12-05 10:30:14.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:30:14 compute-0 nova_compute[257087]: 2025-12-05 10:30:14.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:30:14 compute-0 nova_compute[257087]: 2025-12-05 10:30:14.556 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:30:14 compute-0 nova_compute[257087]: 2025-12-05 10:30:14.557 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:30:14 compute-0 nova_compute[257087]: 2025-12-05 10:30:14.557 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:30:14 compute-0 nova_compute[257087]: 2025-12-05 10:30:14.557 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:30:14 compute-0 nova_compute[257087]: 2025-12-05 10:30:14.557 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:30:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:30:14 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/361242515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.022 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:30:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:15.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:15 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2060071741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:30:15 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/361242515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.226 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.228 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4461MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.228 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.228 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.322 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.323 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.453 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:30:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:30:15] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:30:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:30:15] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:30:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:30:15 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972636427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.912 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.918 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.937 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.939 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:30:15 compute-0 nova_compute[257087]: 2025-12-05 10:30:15.939 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:30:16 compute-0 ceph-mon[74418]: pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:16 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3751891155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:30:16 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2211206128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:30:16 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/972636427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:30:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:16.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:30:17 compute-0 sudo[288741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:30:17 compute-0 sudo[288741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:30:17 compute-0 sudo[288741]: pam_unix(sudo:session): session closed for user root
Dec 05 10:30:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:30:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:17.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:17 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/943635493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:30:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:17.525Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:17 compute-0 nova_compute[257087]: 2025-12-05 10:30:17.860 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:30:17 compute-0 nova_compute[257087]: 2025-12-05 10:30:17.939 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:30:17 compute-0 nova_compute[257087]: 2025-12-05 10:30:17.940 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:30:17 compute-0 nova_compute[257087]: 2025-12-05 10:30:17.940 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:30:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:30:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:30:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:30:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:30:18 compute-0 ceph-mon[74418]: pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:30:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:18.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:18.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:19.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:19 compute-0 nova_compute[257087]: 2025-12-05 10:30:19.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:30:19 compute-0 nova_compute[257087]: 2025-12-05 10:30:19.530 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:30:19 compute-0 nova_compute[257087]: 2025-12-05 10:30:19.530 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:30:19 compute-0 nova_compute[257087]: 2025-12-05 10:30:19.549 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:30:20 compute-0 ceph-mon[74418]: pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:20.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:30:20.589 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:30:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:30:20.590 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:30:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:30:20.590 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:30:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:30:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:21.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:30:22 compute-0 ceph-mon[74418]: pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:30:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:22.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:22 compute-0 nova_compute[257087]: 2025-12-05 10:30:22.863 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:30:22 compute-0 nova_compute[257087]: 2025-12-05 10:30:22.865 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:30:22 compute-0 nova_compute[257087]: 2025-12-05 10:30:22.865 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:30:22 compute-0 nova_compute[257087]: 2025-12-05 10:30:22.865 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:30:22 compute-0 nova_compute[257087]: 2025-12-05 10:30:22.915 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:30:22 compute-0 nova_compute[257087]: 2025-12-05 10:30:22.916 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:30:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:30:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:30:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:30:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:30:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:30:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:23.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:23.805Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:24 compute-0 ceph-mon[74418]: pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:30:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:24.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:25.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:30:25] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:30:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:30:25] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:30:26 compute-0 ceph-mon[74418]: pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:26.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:30:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:30:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:27.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:30:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:27.526Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:30:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:30:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:30:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:30:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:30:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:30:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:30:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:30:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:30:27
Dec 05 10:30:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:30:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:30:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', '.nfs', 'images']
Dec 05 10:30:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:30:27 compute-0 nova_compute[257087]: 2025-12-05 10:30:27.917 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:30:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:30:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:30:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:30:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:30:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:30:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:28.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:28.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:30:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:29.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:29 compute-0 ceph-mon[74418]: pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:29 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:30:29 compute-0 ceph-mgr[74711]: [devicehealth INFO root] Check health
Dec 05 10:30:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:30.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:30 compute-0 ceph-mon[74418]: pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:31.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:30:31 compute-0 ceph-mon[74418]: pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:32.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:32 compute-0 nova_compute[257087]: 2025-12-05 10:30:32.920 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:30:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:30:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:30:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:30:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:30:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:33.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:33.806Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:34 compute-0 ceph-mon[74418]: pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:34.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:35.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:30:35] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:30:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:30:35] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:30:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:30:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:36.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:30:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:30:37 compute-0 sudo[288786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:30:37 compute-0 sudo[288786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:30:37 compute-0 sudo[288786]: pam_unix(sudo:session): session closed for user root
Dec 05 10:30:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:37.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:37 compute-0 ceph-mon[74418]: pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:37.527Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:37 compute-0 nova_compute[257087]: 2025-12-05 10:30:37.923 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:30:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:30:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:30:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:30:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:30:38 compute-0 ceph-mon[74418]: pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:38 compute-0 podman[288811]: 2025-12-05 10:30:38.444814166 +0000 UTC m=+0.076821169 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:30:38 compute-0 podman[288813]: 2025-12-05 10:30:38.450646755 +0000 UTC m=+0.082653448 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:30:38 compute-0 podman[288812]: 2025-12-05 10:30:38.48431149 +0000 UTC m=+0.116318473 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 05 10:30:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:38.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:38.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:39.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:40 compute-0 ceph-mon[74418]: pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:40.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:41.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:41 compute-0 ceph-mon[74418]: pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:30:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:30:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:42.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:30:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:30:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:30:42 compute-0 nova_compute[257087]: 2025-12-05 10:30:42.925 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:30:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:30:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:30:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:30:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:30:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:43.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:30:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:43.807Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:44 compute-0 ceph-mon[74418]: pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:44.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:45.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:30:45] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:30:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:30:45] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:30:46 compute-0 ceph-mon[74418]: pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:46.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:30:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:47.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:47.528Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:47 compute-0 nova_compute[257087]: 2025-12-05 10:30:47.927 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:30:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:30:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:30:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:30:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:30:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:48.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:48.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:49.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:49 compute-0 ceph-mon[74418]: pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:50 compute-0 ceph-mon[74418]: pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:50.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:51.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:30:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:52.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:52 compute-0 ceph-mon[74418]: pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:52 compute-0 nova_compute[257087]: 2025-12-05 10:30:52.929 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:30:52 compute-0 nova_compute[257087]: 2025-12-05 10:30:52.931 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:30:52 compute-0 nova_compute[257087]: 2025-12-05 10:30:52.931 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:30:52 compute-0 nova_compute[257087]: 2025-12-05 10:30:52.931 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:30:52 compute-0 nova_compute[257087]: 2025-12-05 10:30:52.984 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:30:52 compute-0 nova_compute[257087]: 2025-12-05 10:30:52.985 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:30:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:30:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:30:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:30:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:30:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:53.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:53.809Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:54.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:54 compute-0 ceph-mon[74418]: pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:55.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:30:55] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:30:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:30:55] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:30:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:30:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:56.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:57.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:57 compute-0 sudo[288892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:30:57 compute-0 sudo[288892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:30:57 compute-0 sudo[288892]: pam_unix(sudo:session): session closed for user root
Dec 05 10:30:57 compute-0 ceph-mon[74418]: pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:30:57 compute-0 sudo[288917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:30:57 compute-0 sudo[288917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:30:57 compute-0 sudo[288917]: pam_unix(sudo:session): session closed for user root
Dec 05 10:30:57 compute-0 sudo[288942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:30:57 compute-0 sudo[288942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:30:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:57.530Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:30:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:30:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:30:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:30:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:30:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:30:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:30:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:30:57 compute-0 nova_compute[257087]: 2025-12-05 10:30:57.986 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:30:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:30:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:30:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:30:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:30:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:30:58 compute-0 sudo[288942]: pam_unix(sudo:session): session closed for user root
Dec 05 10:30:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:30:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:30:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:30:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:30:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:30:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:30:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:30:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:30:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:30:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:30:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:30:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:30:58 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:30:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:30:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:30:58 compute-0 sudo[288999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:30:58 compute-0 sudo[288999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:30:58 compute-0 sudo[288999]: pam_unix(sudo:session): session closed for user root
Dec 05 10:30:58 compute-0 sudo[289024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:30:58 compute-0 sudo[289024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:30:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:30:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:30:58.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:30:58 compute-0 podman[289091]: 2025-12-05 10:30:58.676251225 +0000 UTC m=+0.025081603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:30:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:30:58.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:30:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:30:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:30:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:30:59.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:30:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=plugins.update.checker t=2025-12-05T10:30:59.316753054Z level=info msg="Update check succeeded" duration=50.129333ms
Dec 05 10:30:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=grafana.update.checker t=2025-12-05T10:30:59.32690447Z level=info msg="Update check succeeded" duration=50.985565ms
Dec 05 10:30:59 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/206009250' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:30:59 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/206009250' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:30:59 compute-0 ceph-mon[74418]: pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:30:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:30:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:30:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:30:59 compute-0 ceph-mon[74418]: pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:30:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:30:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:30:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:30:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:30:59 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:30:59 compute-0 podman[289091]: 2025-12-05 10:30:59.621781575 +0000 UTC m=+0.970611923 container create 5969e3f56ada938866ba6f56c9c8b163ff009b6c0dcfbe3e1503b3d6541e3761 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:30:59 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0[106714]: logger=cleanup t=2025-12-05T10:30:59.62893053Z level=info msg="Completed cleanup jobs" duration=444.092552ms
Dec 05 10:30:59 compute-0 systemd[1]: Started libpod-conmon-5969e3f56ada938866ba6f56c9c8b163ff009b6c0dcfbe3e1503b3d6541e3761.scope.
Dec 05 10:30:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:31:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:31:00 compute-0 podman[289091]: 2025-12-05 10:31:00.117336365 +0000 UTC m=+1.466166753 container init 5969e3f56ada938866ba6f56c9c8b163ff009b6c0dcfbe3e1503b3d6541e3761 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:31:00 compute-0 podman[289091]: 2025-12-05 10:31:00.127558113 +0000 UTC m=+1.476388501 container start 5969e3f56ada938866ba6f56c9c8b163ff009b6c0dcfbe3e1503b3d6541e3761 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_yalow, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 05 10:31:00 compute-0 unruffled_yalow[289107]: 167 167
Dec 05 10:31:00 compute-0 systemd[1]: libpod-5969e3f56ada938866ba6f56c9c8b163ff009b6c0dcfbe3e1503b3d6541e3761.scope: Deactivated successfully.
Dec 05 10:31:00 compute-0 conmon[289107]: conmon 5969e3f56ada938866ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5969e3f56ada938866ba6f56c9c8b163ff009b6c0dcfbe3e1503b3d6541e3761.scope/container/memory.events
Dec 05 10:31:00 compute-0 podman[289091]: 2025-12-05 10:31:00.391575439 +0000 UTC m=+1.740405797 container attach 5969e3f56ada938866ba6f56c9c8b163ff009b6c0dcfbe3e1503b3d6541e3761 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_yalow, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 10:31:00 compute-0 podman[289091]: 2025-12-05 10:31:00.393637786 +0000 UTC m=+1.742468214 container died 5969e3f56ada938866ba6f56c9c8b163ff009b6c0dcfbe3e1503b3d6541e3761 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 10:31:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7c95e86f25fb93ce2a513a137424147e8d729a2008adea2fe98dadb5ed3c2f0-merged.mount: Deactivated successfully.
Dec 05 10:31:00 compute-0 podman[289091]: 2025-12-05 10:31:00.467065681 +0000 UTC m=+1.815896029 container remove 5969e3f56ada938866ba6f56c9c8b163ff009b6c0dcfbe3e1503b3d6541e3761 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 10:31:00 compute-0 systemd[1]: libpod-conmon-5969e3f56ada938866ba6f56c9c8b163ff009b6c0dcfbe3e1503b3d6541e3761.scope: Deactivated successfully.
Dec 05 10:31:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:00.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:00 compute-0 podman[289132]: 2025-12-05 10:31:00.643516707 +0000 UTC m=+0.051347247 container create 61a578f4e2e0d72e1b2725783f09f012d4590a05bcd3904e5ac48485e50fefbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaum, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:31:00 compute-0 ceph-mon[74418]: pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:31:00 compute-0 systemd[1]: Started libpod-conmon-61a578f4e2e0d72e1b2725783f09f012d4590a05bcd3904e5ac48485e50fefbf.scope.
Dec 05 10:31:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ccfdade093492d25d458de4b4d6990576872cee5b21206c9a01bc544921937/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ccfdade093492d25d458de4b4d6990576872cee5b21206c9a01bc544921937/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ccfdade093492d25d458de4b4d6990576872cee5b21206c9a01bc544921937/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ccfdade093492d25d458de4b4d6990576872cee5b21206c9a01bc544921937/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ccfdade093492d25d458de4b4d6990576872cee5b21206c9a01bc544921937/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:00 compute-0 podman[289132]: 2025-12-05 10:31:00.622545287 +0000 UTC m=+0.030375837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:31:00 compute-0 podman[289132]: 2025-12-05 10:31:00.725821524 +0000 UTC m=+0.133652074 container init 61a578f4e2e0d72e1b2725783f09f012d4590a05bcd3904e5ac48485e50fefbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:31:00 compute-0 podman[289132]: 2025-12-05 10:31:00.732955288 +0000 UTC m=+0.140785818 container start 61a578f4e2e0d72e1b2725783f09f012d4590a05bcd3904e5ac48485e50fefbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaum, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:31:00 compute-0 podman[289132]: 2025-12-05 10:31:00.73633296 +0000 UTC m=+0.144163490 container attach 61a578f4e2e0d72e1b2725783f09f012d4590a05bcd3904e5ac48485e50fefbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 10:31:01 compute-0 kind_chaum[289148]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:31:01 compute-0 kind_chaum[289148]: --> All data devices are unavailable
Dec 05 10:31:01 compute-0 systemd[1]: libpod-61a578f4e2e0d72e1b2725783f09f012d4590a05bcd3904e5ac48485e50fefbf.scope: Deactivated successfully.
Dec 05 10:31:01 compute-0 podman[289132]: 2025-12-05 10:31:01.09246112 +0000 UTC m=+0.500291650 container died 61a578f4e2e0d72e1b2725783f09f012d4590a05bcd3904e5ac48485e50fefbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaum, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:31:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-62ccfdade093492d25d458de4b4d6990576872cee5b21206c9a01bc544921937-merged.mount: Deactivated successfully.
Dec 05 10:31:01 compute-0 podman[289132]: 2025-12-05 10:31:01.136727363 +0000 UTC m=+0.544557883 container remove 61a578f4e2e0d72e1b2725783f09f012d4590a05bcd3904e5ac48485e50fefbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaum, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:31:01 compute-0 systemd[1]: libpod-conmon-61a578f4e2e0d72e1b2725783f09f012d4590a05bcd3904e5ac48485e50fefbf.scope: Deactivated successfully.
Dec 05 10:31:01 compute-0 sudo[289024]: pam_unix(sudo:session): session closed for user root
Dec 05 10:31:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:01.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:01 compute-0 sudo[289173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:31:01 compute-0 sudo[289173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:31:01 compute-0 sudo[289173]: pam_unix(sudo:session): session closed for user root
Dec 05 10:31:01 compute-0 sudo[289198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:31:01 compute-0 sudo[289198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:31:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:31:01 compute-0 podman[289265]: 2025-12-05 10:31:01.730450891 +0000 UTC m=+0.046000352 container create cf74291390385abba958676b28f6cb2261b29155105cc3f7107a49eb1bb63fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_yalow, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 10:31:01 compute-0 systemd[1]: Started libpod-conmon-cf74291390385abba958676b28f6cb2261b29155105cc3f7107a49eb1bb63fcd.scope.
Dec 05 10:31:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:31:01 compute-0 podman[289265]: 2025-12-05 10:31:01.71277975 +0000 UTC m=+0.028329241 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:31:01 compute-0 podman[289265]: 2025-12-05 10:31:01.814575507 +0000 UTC m=+0.130124988 container init cf74291390385abba958676b28f6cb2261b29155105cc3f7107a49eb1bb63fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_yalow, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 10:31:01 compute-0 podman[289265]: 2025-12-05 10:31:01.82276562 +0000 UTC m=+0.138315081 container start cf74291390385abba958676b28f6cb2261b29155105cc3f7107a49eb1bb63fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_yalow, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:31:01 compute-0 podman[289265]: 2025-12-05 10:31:01.825851714 +0000 UTC m=+0.141401205 container attach cf74291390385abba958676b28f6cb2261b29155105cc3f7107a49eb1bb63fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:31:01 compute-0 pensive_yalow[289281]: 167 167
Dec 05 10:31:01 compute-0 systemd[1]: libpod-cf74291390385abba958676b28f6cb2261b29155105cc3f7107a49eb1bb63fcd.scope: Deactivated successfully.
Dec 05 10:31:01 compute-0 podman[289265]: 2025-12-05 10:31:01.8293948 +0000 UTC m=+0.144944281 container died cf74291390385abba958676b28f6cb2261b29155105cc3f7107a49eb1bb63fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_yalow, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:31:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6461e64bb8cafbd4604b100a8dba985f7b3b3c0796aa25ce5302c1d44ace964f-merged.mount: Deactivated successfully.
Dec 05 10:31:01 compute-0 podman[289265]: 2025-12-05 10:31:01.868100272 +0000 UTC m=+0.183649723 container remove cf74291390385abba958676b28f6cb2261b29155105cc3f7107a49eb1bb63fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:31:01 compute-0 systemd[1]: libpod-conmon-cf74291390385abba958676b28f6cb2261b29155105cc3f7107a49eb1bb63fcd.scope: Deactivated successfully.
Dec 05 10:31:02 compute-0 podman[289306]: 2025-12-05 10:31:02.046012177 +0000 UTC m=+0.052663421 container create 7f9b06e922266d502c344924735faa10b9956dc0c75105a3c7df2ef150f32450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_raman, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:31:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:31:02 compute-0 systemd[1]: Started libpod-conmon-7f9b06e922266d502c344924735faa10b9956dc0c75105a3c7df2ef150f32450.scope.
Dec 05 10:31:02 compute-0 podman[289306]: 2025-12-05 10:31:02.019291182 +0000 UTC m=+0.025942516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:31:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0caeb15e6ce7151544611730f92ae0582dfe33b3783bca5b981f7a706fe7600/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0caeb15e6ce7151544611730f92ae0582dfe33b3783bca5b981f7a706fe7600/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0caeb15e6ce7151544611730f92ae0582dfe33b3783bca5b981f7a706fe7600/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0caeb15e6ce7151544611730f92ae0582dfe33b3783bca5b981f7a706fe7600/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:02 compute-0 podman[289306]: 2025-12-05 10:31:02.137087113 +0000 UTC m=+0.143738377 container init 7f9b06e922266d502c344924735faa10b9956dc0c75105a3c7df2ef150f32450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_raman, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 05 10:31:02 compute-0 podman[289306]: 2025-12-05 10:31:02.144264698 +0000 UTC m=+0.150915942 container start 7f9b06e922266d502c344924735faa10b9956dc0c75105a3c7df2ef150f32450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:31:02 compute-0 podman[289306]: 2025-12-05 10:31:02.147948139 +0000 UTC m=+0.154599473 container attach 7f9b06e922266d502c344924735faa10b9956dc0c75105a3c7df2ef150f32450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_raman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 10:31:02 compute-0 crazy_raman[289322]: {
Dec 05 10:31:02 compute-0 crazy_raman[289322]:     "1": [
Dec 05 10:31:02 compute-0 crazy_raman[289322]:         {
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             "devices": [
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "/dev/loop3"
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             ],
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             "lv_name": "ceph_lv0",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             "lv_size": "21470642176",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             "name": "ceph_lv0",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             "tags": {
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.cluster_name": "ceph",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.crush_device_class": "",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.encrypted": "0",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.osd_id": "1",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.type": "block",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.vdo": "0",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:                 "ceph.with_tpm": "0"
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             },
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             "type": "block",
Dec 05 10:31:02 compute-0 crazy_raman[289322]:             "vg_name": "ceph_vg0"
Dec 05 10:31:02 compute-0 crazy_raman[289322]:         }
Dec 05 10:31:02 compute-0 crazy_raman[289322]:     ]
Dec 05 10:31:02 compute-0 crazy_raman[289322]: }
Dec 05 10:31:02 compute-0 systemd[1]: libpod-7f9b06e922266d502c344924735faa10b9956dc0c75105a3c7df2ef150f32450.scope: Deactivated successfully.
Dec 05 10:31:02 compute-0 podman[289306]: 2025-12-05 10:31:02.47799221 +0000 UTC m=+0.484643464 container died 7f9b06e922266d502c344924735faa10b9956dc0c75105a3c7df2ef150f32450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_raman, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:31:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0caeb15e6ce7151544611730f92ae0582dfe33b3783bca5b981f7a706fe7600-merged.mount: Deactivated successfully.
Dec 05 10:31:02 compute-0 podman[289306]: 2025-12-05 10:31:02.534974788 +0000 UTC m=+0.541626032 container remove 7f9b06e922266d502c344924735faa10b9956dc0c75105a3c7df2ef150f32450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:31:02 compute-0 systemd[1]: libpod-conmon-7f9b06e922266d502c344924735faa10b9956dc0c75105a3c7df2ef150f32450.scope: Deactivated successfully.
Dec 05 10:31:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:31:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:02.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:31:02 compute-0 sudo[289198]: pam_unix(sudo:session): session closed for user root
Dec 05 10:31:02 compute-0 sudo[289346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:31:02 compute-0 sudo[289346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:31:02 compute-0 sudo[289346]: pam_unix(sudo:session): session closed for user root
Dec 05 10:31:02 compute-0 sudo[289371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:31:02 compute-0 sudo[289371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:31:02 compute-0 nova_compute[257087]: 2025-12-05 10:31:02.988 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:31:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:31:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:31:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:31:03 compute-0 ceph-mon[74418]: pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:31:03 compute-0 podman[289438]: 2025-12-05 10:31:03.166872664 +0000 UTC m=+0.041155560 container create fb65cc0b7c9b36d1e09a6153a10a99d85f02c3da8bf4605593b142b5ec8b23d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_spence, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 10:31:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:03.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:03 compute-0 systemd[1]: Started libpod-conmon-fb65cc0b7c9b36d1e09a6153a10a99d85f02c3da8bf4605593b142b5ec8b23d6.scope.
Dec 05 10:31:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:31:03 compute-0 podman[289438]: 2025-12-05 10:31:03.148475764 +0000 UTC m=+0.022758750 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:31:03 compute-0 podman[289438]: 2025-12-05 10:31:03.255191394 +0000 UTC m=+0.129474310 container init fb65cc0b7c9b36d1e09a6153a10a99d85f02c3da8bf4605593b142b5ec8b23d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_spence, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 05 10:31:03 compute-0 podman[289438]: 2025-12-05 10:31:03.262631767 +0000 UTC m=+0.136914663 container start fb65cc0b7c9b36d1e09a6153a10a99d85f02c3da8bf4605593b142b5ec8b23d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 05 10:31:03 compute-0 podman[289438]: 2025-12-05 10:31:03.265818893 +0000 UTC m=+0.140101789 container attach fb65cc0b7c9b36d1e09a6153a10a99d85f02c3da8bf4605593b142b5ec8b23d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_spence, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:31:03 compute-0 quirky_spence[289454]: 167 167
Dec 05 10:31:03 compute-0 systemd[1]: libpod-fb65cc0b7c9b36d1e09a6153a10a99d85f02c3da8bf4605593b142b5ec8b23d6.scope: Deactivated successfully.
Dec 05 10:31:03 compute-0 podman[289438]: 2025-12-05 10:31:03.268491516 +0000 UTC m=+0.142774412 container died fb65cc0b7c9b36d1e09a6153a10a99d85f02c3da8bf4605593b142b5ec8b23d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_spence, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:31:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-36ef54227f8fdcd1ea5a4e77e564be822078704e1669a7d2adf8edfe9ea8eb82-merged.mount: Deactivated successfully.
Dec 05 10:31:03 compute-0 podman[289438]: 2025-12-05 10:31:03.306278073 +0000 UTC m=+0.180560979 container remove fb65cc0b7c9b36d1e09a6153a10a99d85f02c3da8bf4605593b142b5ec8b23d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 10:31:03 compute-0 systemd[1]: libpod-conmon-fb65cc0b7c9b36d1e09a6153a10a99d85f02c3da8bf4605593b142b5ec8b23d6.scope: Deactivated successfully.
Dec 05 10:31:03 compute-0 podman[289477]: 2025-12-05 10:31:03.470677672 +0000 UTC m=+0.042472346 container create e38afb7829bbbeeac1f82ea39c9ed8196c010a5af84e158c6c192eec1e0bef3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:31:03 compute-0 systemd[1]: Started libpod-conmon-e38afb7829bbbeeac1f82ea39c9ed8196c010a5af84e158c6c192eec1e0bef3b.scope.
Dec 05 10:31:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152b02555fcd131e2b2f55914e2bfde2a62591855b8b052a10e30684cd378b04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152b02555fcd131e2b2f55914e2bfde2a62591855b8b052a10e30684cd378b04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152b02555fcd131e2b2f55914e2bfde2a62591855b8b052a10e30684cd378b04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152b02555fcd131e2b2f55914e2bfde2a62591855b8b052a10e30684cd378b04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:31:03 compute-0 podman[289477]: 2025-12-05 10:31:03.45294097 +0000 UTC m=+0.024735694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:31:03 compute-0 podman[289477]: 2025-12-05 10:31:03.553953665 +0000 UTC m=+0.125748399 container init e38afb7829bbbeeac1f82ea39c9ed8196c010a5af84e158c6c192eec1e0bef3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 10:31:03 compute-0 podman[289477]: 2025-12-05 10:31:03.561640074 +0000 UTC m=+0.133434748 container start e38afb7829bbbeeac1f82ea39c9ed8196c010a5af84e158c6c192eec1e0bef3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_kepler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:31:03 compute-0 podman[289477]: 2025-12-05 10:31:03.564716128 +0000 UTC m=+0.136510822 container attach e38afb7829bbbeeac1f82ea39c9ed8196c010a5af84e158c6c192eec1e0bef3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_kepler, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 10:31:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:03.810Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:31:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:03.812Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:31:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:03.812Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:31:04 compute-0 ceph-mon[74418]: pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:31:04 compute-0 lvm[289569]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:31:04 compute-0 lvm[289569]: VG ceph_vg0 finished
Dec 05 10:31:04 compute-0 adoring_kepler[289494]: {}
Dec 05 10:31:04 compute-0 systemd[1]: libpod-e38afb7829bbbeeac1f82ea39c9ed8196c010a5af84e158c6c192eec1e0bef3b.scope: Deactivated successfully.
Dec 05 10:31:04 compute-0 systemd[1]: libpod-e38afb7829bbbeeac1f82ea39c9ed8196c010a5af84e158c6c192eec1e0bef3b.scope: Consumed 1.239s CPU time.
Dec 05 10:31:04 compute-0 podman[289477]: 2025-12-05 10:31:04.327332096 +0000 UTC m=+0.899126780 container died e38afb7829bbbeeac1f82ea39c9ed8196c010a5af84e158c6c192eec1e0bef3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_kepler, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:31:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-152b02555fcd131e2b2f55914e2bfde2a62591855b8b052a10e30684cd378b04-merged.mount: Deactivated successfully.
Dec 05 10:31:04 compute-0 podman[289477]: 2025-12-05 10:31:04.376080011 +0000 UTC m=+0.947874685 container remove e38afb7829bbbeeac1f82ea39c9ed8196c010a5af84e158c6c192eec1e0bef3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:31:04 compute-0 systemd[1]: libpod-conmon-e38afb7829bbbeeac1f82ea39c9ed8196c010a5af84e158c6c192eec1e0bef3b.scope: Deactivated successfully.
Dec 05 10:31:04 compute-0 sudo[289371]: pam_unix(sudo:session): session closed for user root
Dec 05 10:31:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:31:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:31:04 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:31:04 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:31:04 compute-0 sudo[289587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:31:04 compute-0 sudo[289587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:31:04 compute-0 sudo[289587]: pam_unix(sudo:session): session closed for user root
Dec 05 10:31:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:04.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:05.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:31:05] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec 05 10:31:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:31:05] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec 05 10:31:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:31:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:31:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:31:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:31:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:06.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:06 compute-0 ceph-mon[74418]: pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:31:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:07.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:07.532Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:07 compute-0 nova_compute[257087]: 2025-12-05 10:31:07.990 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:07 compute-0 nova_compute[257087]: 2025-12-05 10:31:07.992 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:31:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:31:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:31:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:31:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:31:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:31:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:08.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:31:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:08.909Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:31:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:08.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:31:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:08.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:09.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:09 compute-0 podman[289618]: 2025-12-05 10:31:09.403670405 +0000 UTC m=+0.057740730 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 10:31:09 compute-0 podman[289620]: 2025-12-05 10:31:09.442280285 +0000 UTC m=+0.093984156 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:31:09 compute-0 podman[289619]: 2025-12-05 10:31:09.456492001 +0000 UTC m=+0.106537796 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:31:09 compute-0 ceph-mon[74418]: pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:31:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:10.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:11.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:11 compute-0 ceph-mon[74418]: pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:31:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:12.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:31:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:31:12 compute-0 nova_compute[257087]: 2025-12-05 10:31:12.992 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:31:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:31:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:31:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:31:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:13.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:13 compute-0 nova_compute[257087]: 2025-12-05 10:31:13.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:31:13 compute-0 nova_compute[257087]: 2025-12-05 10:31:13.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:31:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:13.813Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:31:14 compute-0 ceph-mon[74418]: pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:31:14 compute-0 nova_compute[257087]: 2025-12-05 10:31:14.526 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:31:14 compute-0 nova_compute[257087]: 2025-12-05 10:31:14.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:31:14 compute-0 nova_compute[257087]: 2025-12-05 10:31:14.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:31:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:14.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:14 compute-0 nova_compute[257087]: 2025-12-05 10:31:14.699 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:31:14 compute-0 nova_compute[257087]: 2025-12-05 10:31:14.699 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:31:14 compute-0 nova_compute[257087]: 2025-12-05 10:31:14.699 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:31:14 compute-0 nova_compute[257087]: 2025-12-05 10:31:14.700 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:31:14 compute-0 nova_compute[257087]: 2025-12-05 10:31:14.700 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:31:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:31:15 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3939924680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:31:15 compute-0 nova_compute[257087]: 2025-12-05 10:31:15.156 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:31:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:15.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:15 compute-0 nova_compute[257087]: 2025-12-05 10:31:15.320 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:31:15 compute-0 nova_compute[257087]: 2025-12-05 10:31:15.321 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4468MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:31:15 compute-0 nova_compute[257087]: 2025-12-05 10:31:15.321 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:31:15 compute-0 nova_compute[257087]: 2025-12-05 10:31:15.321 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:31:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:31:15] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:31:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:31:15] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:31:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:31:16 compute-0 ceph-mon[74418]: pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:31:16 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1890903866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:31:16 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3939924680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:31:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:16.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:16 compute-0 nova_compute[257087]: 2025-12-05 10:31:16.942 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:31:16 compute-0 nova_compute[257087]: 2025-12-05 10:31:16.943 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:31:16 compute-0 nova_compute[257087]: 2025-12-05 10:31:16.962 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:31:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:31:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:17.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:17 compute-0 sudo[289728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:31:17 compute-0 sudo[289728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:31:17 compute-0 sudo[289728]: pam_unix(sudo:session): session closed for user root
Dec 05 10:31:17 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1341485937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:31:17 compute-0 ceph-mon[74418]: pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:31:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:31:17 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/873280184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:31:17 compute-0 nova_compute[257087]: 2025-12-05 10:31:17.488 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:31:17 compute-0 nova_compute[257087]: 2025-12-05 10:31:17.495 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:31:17 compute-0 nova_compute[257087]: 2025-12-05 10:31:17.516 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:31:17 compute-0 nova_compute[257087]: 2025-12-05 10:31:17.520 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:31:17 compute-0 nova_compute[257087]: 2025-12-05 10:31:17.521 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:31:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:17.533Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:31:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:17.534Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:17 compute-0 nova_compute[257087]: 2025-12-05 10:31:17.992 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:17 compute-0 nova_compute[257087]: 2025-12-05 10:31:17.996 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:31:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:31:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:31:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:31:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:31:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/873280184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:31:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2737310631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:31:18 compute-0 ceph-mon[74418]: pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:31:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:18.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:18.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:31:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:19.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:31:19 compute-0 nova_compute[257087]: 2025-12-05 10:31:19.523 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:31:19 compute-0 nova_compute[257087]: 2025-12-05 10:31:19.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:31:19 compute-0 nova_compute[257087]: 2025-12-05 10:31:19.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:31:19 compute-0 nova_compute[257087]: 2025-12-05 10:31:19.524 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:31:19 compute-0 nova_compute[257087]: 2025-12-05 10:31:19.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:31:19 compute-0 nova_compute[257087]: 2025-12-05 10:31:19.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:31:19 compute-0 nova_compute[257087]: 2025-12-05 10:31:19.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:31:19 compute-0 nova_compute[257087]: 2025-12-05 10:31:19.846 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:31:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:31:20 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3201018702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:31:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:31:20.591 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:31:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:31:20.592 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:31:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:31:20.592 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:31:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:20.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:21.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:31:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:31:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:22.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:22 compute-0 nova_compute[257087]: 2025-12-05 10:31:22.996 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:31:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:31:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:31:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:31:23 compute-0 ceph-mon[74418]: pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:31:23 compute-0 ceph-mon[74418]: pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:31:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:23.814Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:31:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:24.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:25 compute-0 ceph-mon[74418]: pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:31:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:25.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:31:25] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:31:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:31:25] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:31:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:26.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:27.535Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:31:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:27.535Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:31:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:27.535Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:31:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:31:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:31:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:31:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:31:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:31:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:31:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:31:27
Dec 05 10:31:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:31:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:31:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.control', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', '.nfs', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'cephfs.cephfs.data']
Dec 05 10:31:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:31:27 compute-0 nova_compute[257087]: 2025-12-05 10:31:27.999 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:31:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:31:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:31:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:31:28 compute-0 nova_compute[257087]: 2025-12-05 10:31:27.999 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:28 compute-0 nova_compute[257087]: 2025-12-05 10:31:28.000 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:31:28 compute-0 nova_compute[257087]: 2025-12-05 10:31:28.000 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:31:28 compute-0 nova_compute[257087]: 2025-12-05 10:31:28.000 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:31:28 compute-0 nova_compute[257087]: 2025-12-05 10:31:28.001 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
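This pg_autoscaler pass is internally consistent and explains the per-pool quanta: each raw pg target equals capacity_ratio x bias x 300, where 300 is the cluster PG budget implied by the ratios (e.g. the default mon_target_pg_per_osd of 100 across 3 OSDs; the 64411926528 bytes in the effective_target_ratio lines is the same 60 GiB the pgmap reports), and the quantized value is that target rounded to a power of two with a per-pool floor ('.mgr' allows 1, the CephFS metadata pool 16, everything else 32). A sketch that reproduces the numbers above under those assumptions; the real module also weighs target_size_ratio, replica size, and shrink thresholds:

    import math

    def nearest_power_of_two(n: float) -> int:
        """Round to the nearest power of two, minimum 1 (approximation)."""
        if n <= 1:
            return 1
        lo = 2 ** math.floor(math.log2(n))
        hi = lo * 2
        return hi if (hi - n) < (n - lo) else lo

    def pg_target(capacity_ratio, bias, budget=300, pg_num_min=32):
        raw = capacity_ratio * bias * budget
        return raw, max(pg_num_min, nearest_power_of_two(raw))

    # Reproduce three lines from the log (per-pool pg_num_min values are assumptions):
    print(pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))   # .mgr: (0.00215..., 1)
    print(pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))  # cephfs.cephfs.meta: (0.00061..., 16)
    print(pg_target(6.359070782053786e-08, 1.0))                 # vms/.nfs: (1.907...e-05, 32)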
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:31:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:31:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:28.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
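Each RGW request produces the three lines above: request start, completion with op status/HTTP status/latency, and a beast access-log line. The anonymous "HEAD / HTTP/1.0" probes arriving every ~2 s from 192.168.122.100 and .102 look like load-balancer health checks rather than user traffic. A small parser for the beast line, assuming the field layout shown here (client, user, timestamp, request, status, bytes, then three unused dashes and the latency):

    import re

    BEAST = re.compile(
        r'beast: (?P<handle>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous '
            '[05/Dec/2025:10:31:28.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.match(line)
    print(m.group('client'), m.group('request'), m.group('status'), m.group('latency'))
    # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000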
Dec 05 10:31:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:31:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:31:28 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:31:28 compute-0 ceph-mon[74418]: pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:28 compute-0 ceph-mon[74418]: pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
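The mgr polls the mon for the OSD blocklist roughly every 15 s (dispatches at 10:31:28, 10:31:42, and 10:31:57 in this section). The same mon command can be issued from any authorized client with the python rados binding; a minimal sketch, assuming a local ceph.conf and admin keyring at the usual paths:

    import json
    import rados

    # Connect using the local cluster config (paths are assumptions).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')  # b'' = no input payload
    print(ret, json.loads(outbuf or b'[]'))

    cluster.shutdown()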
Dec 05 10:31:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:28.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
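Alertmanager on this node cannot deliver to the dashboard webhook receivers on compute-1 and compute-2 (:8443/api/prometheus_receiver): both retries hit the context deadline, so every flush ends with "Notify for alerts failed". The error recurs every ~5-10 s throughout this window, which points at the receivers being unreachable rather than momentarily slow. For local testing, a stand-in receiver that accepts Alertmanager's webhook POSTs and prints them; the port and path mirror the URL in the log, everything else is an assumption (plain HTTP, as in the logged URL):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get('Content-Length', 0)))
            payload = json.loads(body or b'{}')
            # Alertmanager webhook payloads carry an "alerts" list.
            print(f"{self.path}: {len(payload.get('alerts', []))} alert(s)")
            self.send_response(200)
            self.end_headers()

    HTTPServer(('0.0.0.0', 8443), Receiver).serve_forever()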
Dec 05 10:31:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:30.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:31.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:32.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:31:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:31:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:31:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
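This four-line ganesha block repeats every five seconds for the whole section: the server re-enters a fresh 90-second grace period, reloads (zero) client reclaim records, reports reclaim complete(0)/clid count(0), and rados_cluster_grace_enforcing fails with ret=-45 (a negative errno), so the cluster-wide grace never lifts even with no clients attached. With zero clients, that pattern points at the shared RADOS grace database rather than client reclaim. One way to inspect that database is the ganesha-rados-grace tool; the invocation below is hypothetical — the pool, namespace, and flags must match the rados_cluster recovery-backend config and vary by version (see ganesha-rados-grace(8)):

    import subprocess

    # Hypothetical: '.nfs' appears as a pool in the autoscaler log above, and
    # cephadm-deployed NFS clusters typically keep the grace db there under
    # the NFS cluster's namespace; adjust --pool/--ns to the actual config.
    out = subprocess.run(
        ["ganesha-rados-grace", "--pool", ".nfs", "--ns", "cephfs", "dump"],
        capture_output=True, text=True, check=False,
    )
    print(out.stdout or out.stderr)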
Dec 05 10:31:33 compute-0 nova_compute[257087]: 2025-12-05 10:31:33.003 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:31:33 compute-0 nova_compute[257087]: 2025-12-05 10:31:33.004 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:31:33 compute-0 nova_compute[257087]: 2025-12-05 10:31:33.005 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:31:33 compute-0 nova_compute[257087]: 2025-12-05 10:31:33.005 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:31:33 compute-0 nova_compute[257087]: 2025-12-05 10:31:33.054 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:33 compute-0 nova_compute[257087]: 2025-12-05 10:31:33.056 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
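The ovsdbapp DEBUG lines trace the python-ovs keepalive cycle on the connection to the local ovsdb-server at tcp:127.0.0.1:6640: after ~5 s of idle the IDL sends an inactivity probe and the connection FSM drops to IDLE; the POLLIN on fd 25 is the reply, which returns it to ACTIVE. The same cycle recurs every five seconds below and is normal idle behaviour, not a fault. A toy model of that probe logic (not the ovs library's actual implementation), assuming the 5000 ms default probe interval:

    import time

    PROBE_INTERVAL = 5.0  # seconds; matches the ~5000 ms idle seen above

    class Session:
        def __init__(self):
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def run(self):
            idle = time.monotonic() - self.last_activity
            if self.state == "ACTIVE" and idle >= PROBE_INTERVAL:
                self.state = "IDLE"          # "entering IDLE", probe sent
                return "send_probe"
            if self.state == "IDLE" and idle >= 2 * PROBE_INTERVAL:
                return "reconnect"           # probe unanswered: drop and redial
            return None

        def received(self):                  # POLLIN: any reply counts
            self.state = "ACTIVE"            # "entering ACTIVE"
            self.last_activity = time.monotonic()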
Dec 05 10:31:33 compute-0 ceph-mon[74418]: pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:33.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:31:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:33.815Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:34 compute-0 ceph-mon[74418]: pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:34 compute-0 ceph-mon[74418]: pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:31:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:34.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:31:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:35.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:31:35] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Dec 05 10:31:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:31:35] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
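Prometheus (2.51.0, on 192.168.122.100) scrapes the mgr's exporter every 10 s; the request is logged twice, once by the container unit's stdout and once by ceph-mgr's own cherrypy access log. The same endpoint can be fetched by hand — the log shows the path and body size but not the port, so the default mgr prometheus port 9283 below is an assumption:

    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics") as r:
        text = r.read().decode()

    # ceph_health_status is one of the metrics the mgr prometheus module exports.
    health = [l for l in text.splitlines() if l.startswith("ceph_health_status")]
    print(health)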
Dec 05 10:31:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:36 compute-0 ceph-mon[74418]: pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:36.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:37.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:37 compute-0 sudo[289775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:31:37 compute-0 sudo[289775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:31:37 compute-0 sudo[289775]: pam_unix(sudo:session): session closed for user root
Dec 05 10:31:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:37.536Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:31:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:31:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:31:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:31:38 compute-0 nova_compute[257087]: 2025-12-05 10:31:38.056 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:31:38 compute-0 nova_compute[257087]: 2025-12-05 10:31:38.058 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:31:38 compute-0 nova_compute[257087]: 2025-12-05 10:31:38.058 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:31:38 compute-0 nova_compute[257087]: 2025-12-05 10:31:38.058 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:31:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:38 compute-0 nova_compute[257087]: 2025-12-05 10:31:38.113 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:38 compute-0 nova_compute[257087]: 2025-12-05 10:31:38.114 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:31:38 compute-0 ceph-mon[74418]: pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:38.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:31:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:38.912Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:31:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:38.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.003000080s ======
Dec 05 10:31:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec 05 10:31:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:40 compute-0 ceph-mon[74418]: pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:40 compute-0 podman[289802]: 2025-12-05 10:31:40.391015819 +0000 UTC m=+0.057628776 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:31:40 compute-0 podman[289804]: 2025-12-05 10:31:40.408466355 +0000 UTC m=+0.067973090 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible)
Dec 05 10:31:40 compute-0 podman[289803]: 2025-12-05 10:31:40.426935606 +0000 UTC m=+0.089230857 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
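The three podman events above are periodic healthcheck results for the edpm_ansible-managed containers (ovn_metadata_agent, multipathd, ovn_controller), all healthy with a failing streak of 0; the bulk of each line is the container's config_data echoed back by the event. Pulling the interesting fields out of such an event line, based on the layout shown (the image sha and most attributes are elided in the sample string):

    import re

    EVENT = re.compile(
        r'container health_status (?P<cid>[0-9a-f]{64}) \(.*?'
        r'name=(?P<name>[^,]+), health_status=(?P<status>[^,]+), '
        r'health_failing_streak=(?P<streak>\d+)'
    )

    line = ("2025-12-05 10:31:40.408466355 +0000 UTC m=+0.067973090 container "
            "health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 "
            "(image=..., name=multipathd, health_status=healthy, "
            "health_failing_streak=0, health_log=, ...)")
    m = EVENT.search(line)
    print(m.group('name'), m.group('status'), m.group('streak'))
    # -> multipathd healthy 0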
Dec 05 10:31:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:40.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:31:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:41.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:31:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:42.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:31:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:31:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:31:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:31:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:31:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:31:43 compute-0 nova_compute[257087]: 2025-12-05 10:31:43.115 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:43 compute-0 ceph-mon[74418]: pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:31:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:43.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:31:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:43.816Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:44 compute-0 ceph-mon[74418]: pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:44.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:45.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:31:45] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:31:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:31:45] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:31:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:46 compute-0 ceph-mon[74418]: pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:46.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:47.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:47.537Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:31:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:31:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:31:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:31:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:48 compute-0 nova_compute[257087]: 2025-12-05 10:31:48.117 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:31:48 compute-0 nova_compute[257087]: 2025-12-05 10:31:48.118 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:31:48 compute-0 nova_compute[257087]: 2025-12-05 10:31:48.118 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:31:48 compute-0 nova_compute[257087]: 2025-12-05 10:31:48.119 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:31:48 compute-0 nova_compute[257087]: 2025-12-05 10:31:48.178 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:48 compute-0 nova_compute[257087]: 2025-12-05 10:31:48.179 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:31:48 compute-0 ceph-mon[74418]: pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:48.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:31:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:48.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:31:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:49.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:31:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:50 compute-0 ceph-mon[74418]: pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:50.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:51.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:52 compute-0 ceph-mon[74418]: pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:52.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:31:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:31:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:31:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:31:53 compute-0 nova_compute[257087]: 2025-12-05 10:31:53.181 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:31:53 compute-0 nova_compute[257087]: 2025-12-05 10:31:53.183 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:31:53 compute-0 nova_compute[257087]: 2025-12-05 10:31:53.183 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:31:53 compute-0 nova_compute[257087]: 2025-12-05 10:31:53.184 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:31:53 compute-0 nova_compute[257087]: 2025-12-05 10:31:53.221 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:53 compute-0 nova_compute[257087]: 2025-12-05 10:31:53.221 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:31:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:53.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:31:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:53.818Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:54 compute-0 ceph-mon[74418]: pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:31:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:54.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:55.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:31:55] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:31:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:31:55] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:31:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:56 compute-0 ceph-mon[74418]: pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:56.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:31:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:31:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1433937813' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:31:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:31:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1433937813' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:31:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1433937813' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:31:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1433937813' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
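The client.? at 192.168.122.10 authenticating as client.openstack is the OpenStack storage side polling capacity: a cluster-wide "df" plus a per-pool "osd pool get-quota" on volumes. The equivalent query through python-rados, with the same caveats as the earlier mon_command sketch (conf and keyring paths for client.openstack are assumptions):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()

    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        print(cmd["prefix"], "->", ret, (out or b"")[:120])

    cluster.shutdown()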
Dec 05 10:31:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:57.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:57 compute-0 sudo[289885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:31:57 compute-0 sudo[289885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:31:57 compute-0 sudo[289885]: pam_unix(sudo:session): session closed for user root
Dec 05 10:31:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:57.539Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:31:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:31:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:31:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:31:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:31:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:31:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:31:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:31:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:31:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:31:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:31:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:31:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:31:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:58 compute-0 nova_compute[257087]: 2025-12-05 10:31:58.223 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:31:58 compute-0 nova_compute[257087]: 2025-12-05 10:31:58.224 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:31:58 compute-0 nova_compute[257087]: 2025-12-05 10:31:58.224 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:31:58 compute-0 nova_compute[257087]: 2025-12-05 10:31:58.224 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:31:58 compute-0 nova_compute[257087]: 2025-12-05 10:31:58.224 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:31:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:31:58 compute-0 ceph-mon[74418]: pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:31:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:31:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:31:58.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:31:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:31:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:58.915Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:31:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:31:58.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:31:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:31:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:31:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:31:59.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:00 compute-0 ceph-mon[74418]: pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:00.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:32:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:01.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:32:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:02 compute-0 ceph-mon[74418]: pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:02.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:32:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:32:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:32:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:32:03 compute-0 nova_compute[257087]: 2025-12-05 10:32:03.226 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:03 compute-0 nova_compute[257087]: 2025-12-05 10:32:03.228 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:03 compute-0 nova_compute[257087]: 2025-12-05 10:32:03.228 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:32:03 compute-0 nova_compute[257087]: 2025-12-05 10:32:03.228 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:03 compute-0 nova_compute[257087]: 2025-12-05 10:32:03.266 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:03 compute-0 nova_compute[257087]: 2025-12-05 10:32:03.267 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:32:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:03.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:32:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:32:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:03.819Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:04 compute-0 ceph-mon[74418]: pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:04.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:04 compute-0 sudo[289918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:32:04 compute-0 sudo[289918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:32:04 compute-0 sudo[289918]: pam_unix(sudo:session): session closed for user root
Dec 05 10:32:04 compute-0 sudo[289943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:32:04 compute-0 sudo[289943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:32:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:05.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:05 compute-0 sudo[289943]: pam_unix(sudo:session): session closed for user root
Dec 05 10:32:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:32:05 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:32:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:32:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:32:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:32:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:32:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:32:05] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:32:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:32:05] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:32:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:32:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:32:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:32:05 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:32:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:32:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:32:05 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:32:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:32:05 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:32:05 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:32:05 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:32:05 compute-0 sudo[290001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:32:05 compute-0 sudo[290001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:32:05 compute-0 sudo[290001]: pam_unix(sudo:session): session closed for user root
Dec 05 10:32:05 compute-0 sudo[290026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:32:05 compute-0 sudo[290026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:32:06 compute-0 podman[290092]: 2025-12-05 10:32:06.334171438 +0000 UTC m=+0.044837600 container create 5f00268fae6b6d5d3631140e426b1da194b541fc053d295db0165d4057c84673 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cohen, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 05 10:32:06 compute-0 systemd[1]: Started libpod-conmon-5f00268fae6b6d5d3631140e426b1da194b541fc053d295db0165d4057c84673.scope.
Dec 05 10:32:06 compute-0 podman[290092]: 2025-12-05 10:32:06.314482553 +0000 UTC m=+0.025148745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:32:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:32:06 compute-0 podman[290092]: 2025-12-05 10:32:06.445384611 +0000 UTC m=+0.156050813 container init 5f00268fae6b6d5d3631140e426b1da194b541fc053d295db0165d4057c84673 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cohen, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:32:06 compute-0 podman[290092]: 2025-12-05 10:32:06.453619205 +0000 UTC m=+0.164285367 container start 5f00268fae6b6d5d3631140e426b1da194b541fc053d295db0165d4057c84673 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cohen, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:32:06 compute-0 podman[290092]: 2025-12-05 10:32:06.457731676 +0000 UTC m=+0.168397878 container attach 5f00268fae6b6d5d3631140e426b1da194b541fc053d295db0165d4057c84673 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:32:06 compute-0 admiring_cohen[290108]: 167 167
Dec 05 10:32:06 compute-0 systemd[1]: libpod-5f00268fae6b6d5d3631140e426b1da194b541fc053d295db0165d4057c84673.scope: Deactivated successfully.
Dec 05 10:32:06 compute-0 podman[290092]: 2025-12-05 10:32:06.459957647 +0000 UTC m=+0.170623809 container died 5f00268fae6b6d5d3631140e426b1da194b541fc053d295db0165d4057c84673 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cohen, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 10:32:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1006662353f595d3fb9d9a49ec714d7a5b2af316bdc66e47b3238b9ad67bf7e6-merged.mount: Deactivated successfully.
Dec 05 10:32:06 compute-0 podman[290092]: 2025-12-05 10:32:06.498348941 +0000 UTC m=+0.209015113 container remove 5f00268fae6b6d5d3631140e426b1da194b541fc053d295db0165d4057c84673 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cohen, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:32:06 compute-0 systemd[1]: libpod-conmon-5f00268fae6b6d5d3631140e426b1da194b541fc053d295db0165d4057c84673.scope: Deactivated successfully.
Dec 05 10:32:06 compute-0 podman[290133]: 2025-12-05 10:32:06.667779526 +0000 UTC m=+0.044718967 container create 979b2e34f2065e424e5bcbab07d4bbc4b8a39ab227b0dcb3f5f4120978d13acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_gates, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 10:32:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:06.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:06 compute-0 systemd[1]: Started libpod-conmon-979b2e34f2065e424e5bcbab07d4bbc4b8a39ab227b0dcb3f5f4120978d13acc.scope.
Dec 05 10:32:06 compute-0 ceph-mon[74418]: pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:32:06 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:32:06 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:32:06 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:32:06 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:32:06 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:32:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55478a63a7d0506bb7a5374520e20e6c22e2c14fdb11eb10b9fdeb813d7be0db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:06 compute-0 podman[290133]: 2025-12-05 10:32:06.650002093 +0000 UTC m=+0.026941564 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55478a63a7d0506bb7a5374520e20e6c22e2c14fdb11eb10b9fdeb813d7be0db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55478a63a7d0506bb7a5374520e20e6c22e2c14fdb11eb10b9fdeb813d7be0db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55478a63a7d0506bb7a5374520e20e6c22e2c14fdb11eb10b9fdeb813d7be0db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55478a63a7d0506bb7a5374520e20e6c22e2c14fdb11eb10b9fdeb813d7be0db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:06 compute-0 podman[290133]: 2025-12-05 10:32:06.755630584 +0000 UTC m=+0.132570035 container init 979b2e34f2065e424e5bcbab07d4bbc4b8a39ab227b0dcb3f5f4120978d13acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_gates, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 10:32:06 compute-0 podman[290133]: 2025-12-05 10:32:06.765316157 +0000 UTC m=+0.142255608 container start 979b2e34f2065e424e5bcbab07d4bbc4b8a39ab227b0dcb3f5f4120978d13acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:32:06 compute-0 podman[290133]: 2025-12-05 10:32:06.769041518 +0000 UTC m=+0.145980969 container attach 979b2e34f2065e424e5bcbab07d4bbc4b8a39ab227b0dcb3f5f4120978d13acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_gates, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Dec 05 10:32:07 compute-0 distracted_gates[290150]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:32:07 compute-0 distracted_gates[290150]: --> All data devices are unavailable
Dec 05 10:32:07 compute-0 systemd[1]: libpod-979b2e34f2065e424e5bcbab07d4bbc4b8a39ab227b0dcb3f5f4120978d13acc.scope: Deactivated successfully.
Dec 05 10:32:07 compute-0 podman[290133]: 2025-12-05 10:32:07.166673566 +0000 UTC m=+0.543613017 container died 979b2e34f2065e424e5bcbab07d4bbc4b8a39ab227b0dcb3f5f4120978d13acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_gates, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Dec 05 10:32:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-55478a63a7d0506bb7a5374520e20e6c22e2c14fdb11eb10b9fdeb813d7be0db-merged.mount: Deactivated successfully.
Dec 05 10:32:07 compute-0 podman[290133]: 2025-12-05 10:32:07.20983406 +0000 UTC m=+0.586773501 container remove 979b2e34f2065e424e5bcbab07d4bbc4b8a39ab227b0dcb3f5f4120978d13acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 10:32:07 compute-0 systemd[1]: libpod-conmon-979b2e34f2065e424e5bcbab07d4bbc4b8a39ab227b0dcb3f5f4120978d13acc.scope: Deactivated successfully.
Dec 05 10:32:07 compute-0 sudo[290026]: pam_unix(sudo:session): session closed for user root
Dec 05 10:32:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:07.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:07 compute-0 sudo[290178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:32:07 compute-0 sudo[290178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:32:07 compute-0 sudo[290178]: pam_unix(sudo:session): session closed for user root
Dec 05 10:32:07 compute-0 sudo[290203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:32:07 compute-0 sudo[290203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:32:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:32:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:07.540Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:07 compute-0 podman[290268]: 2025-12-05 10:32:07.863796465 +0000 UTC m=+0.040886912 container create a9ad92a0695827b18f00d25224b10e5891e70a98aac76566ac8b079fc8c88fa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hertz, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 05 10:32:07 compute-0 systemd[1]: Started libpod-conmon-a9ad92a0695827b18f00d25224b10e5891e70a98aac76566ac8b079fc8c88fa8.scope.
Dec 05 10:32:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:32:07 compute-0 podman[290268]: 2025-12-05 10:32:07.938398373 +0000 UTC m=+0.115488840 container init a9ad92a0695827b18f00d25224b10e5891e70a98aac76566ac8b079fc8c88fa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 10:32:07 compute-0 podman[290268]: 2025-12-05 10:32:07.846157555 +0000 UTC m=+0.023248033 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:32:07 compute-0 podman[290268]: 2025-12-05 10:32:07.947948092 +0000 UTC m=+0.125038549 container start a9ad92a0695827b18f00d25224b10e5891e70a98aac76566ac8b079fc8c88fa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hertz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 05 10:32:07 compute-0 podman[290268]: 2025-12-05 10:32:07.95156743 +0000 UTC m=+0.128657917 container attach a9ad92a0695827b18f00d25224b10e5891e70a98aac76566ac8b079fc8c88fa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 10:32:07 compute-0 laughing_hertz[290284]: 167 167
Dec 05 10:32:07 compute-0 systemd[1]: libpod-a9ad92a0695827b18f00d25224b10e5891e70a98aac76566ac8b079fc8c88fa8.scope: Deactivated successfully.
Dec 05 10:32:07 compute-0 podman[290268]: 2025-12-05 10:32:07.954628334 +0000 UTC m=+0.131718811 container died a9ad92a0695827b18f00d25224b10e5891e70a98aac76566ac8b079fc8c88fa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 05 10:32:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-09549ece5d094585e153b7d42ff6c78e89309788c5072f2ffe631cdec188aba6-merged.mount: Deactivated successfully.
Dec 05 10:32:07 compute-0 podman[290268]: 2025-12-05 10:32:07.987446925 +0000 UTC m=+0.164537392 container remove a9ad92a0695827b18f00d25224b10e5891e70a98aac76566ac8b079fc8c88fa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hertz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 10:32:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:32:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:32:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:32:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:32:08 compute-0 systemd[1]: libpod-conmon-a9ad92a0695827b18f00d25224b10e5891e70a98aac76566ac8b079fc8c88fa8.scope: Deactivated successfully.
Dec 05 10:32:08 compute-0 podman[290309]: 2025-12-05 10:32:08.194611476 +0000 UTC m=+0.046905656 container create 3521dc0d07cf10d97a4c9d4abad715c33461582f54fa0bd983622c22905d1b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:32:08 compute-0 systemd[1]: Started libpod-conmon-3521dc0d07cf10d97a4c9d4abad715c33461582f54fa0bd983622c22905d1b3c.scope.
Dec 05 10:32:08 compute-0 nova_compute[257087]: 2025-12-05 10:32:08.268 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:08 compute-0 nova_compute[257087]: 2025-12-05 10:32:08.271 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:08 compute-0 nova_compute[257087]: 2025-12-05 10:32:08.271 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:32:08 compute-0 nova_compute[257087]: 2025-12-05 10:32:08.271 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:08 compute-0 podman[290309]: 2025-12-05 10:32:08.177625995 +0000 UTC m=+0.029920205 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:32:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:32:08 compute-0 nova_compute[257087]: 2025-12-05 10:32:08.305 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:08 compute-0 nova_compute[257087]: 2025-12-05 10:32:08.306 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea87a479cab4ee94b9524d560ac407c8f438296fd3dcd0c714d1a9d325d845c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea87a479cab4ee94b9524d560ac407c8f438296fd3dcd0c714d1a9d325d845c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea87a479cab4ee94b9524d560ac407c8f438296fd3dcd0c714d1a9d325d845c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea87a479cab4ee94b9524d560ac407c8f438296fd3dcd0c714d1a9d325d845c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:08 compute-0 podman[290309]: 2025-12-05 10:32:08.323224102 +0000 UTC m=+0.175518292 container init 3521dc0d07cf10d97a4c9d4abad715c33461582f54fa0bd983622c22905d1b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:32:08 compute-0 podman[290309]: 2025-12-05 10:32:08.330806409 +0000 UTC m=+0.183100589 container start 3521dc0d07cf10d97a4c9d4abad715c33461582f54fa0bd983622c22905d1b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:32:08 compute-0 podman[290309]: 2025-12-05 10:32:08.338631182 +0000 UTC m=+0.190925392 container attach 3521dc0d07cf10d97a4c9d4abad715c33461582f54fa0bd983622c22905d1b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_kepler, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 10:32:08 compute-0 fervent_kepler[290325]: {
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:     "1": [
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:         {
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             "devices": [
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "/dev/loop3"
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             ],
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             "lv_name": "ceph_lv0",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             "lv_size": "21470642176",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             "name": "ceph_lv0",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             "tags": {
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.cluster_name": "ceph",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.crush_device_class": "",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.encrypted": "0",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.osd_id": "1",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.type": "block",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.vdo": "0",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:                 "ceph.with_tpm": "0"
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             },
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             "type": "block",
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:             "vg_name": "ceph_vg0"
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:         }
Dec 05 10:32:08 compute-0 fervent_kepler[290325]:     ]
Dec 05 10:32:08 compute-0 fervent_kepler[290325]: }
Dec 05 10:32:08 compute-0 systemd[1]: libpod-3521dc0d07cf10d97a4c9d4abad715c33461582f54fa0bd983622c22905d1b3c.scope: Deactivated successfully.
Dec 05 10:32:08 compute-0 podman[290309]: 2025-12-05 10:32:08.626001522 +0000 UTC m=+0.478295702 container died 3521dc0d07cf10d97a4c9d4abad715c33461582f54fa0bd983622c22905d1b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:32:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-dea87a479cab4ee94b9524d560ac407c8f438296fd3dcd0c714d1a9d325d845c-merged.mount: Deactivated successfully.
Dec 05 10:32:08 compute-0 podman[290309]: 2025-12-05 10:32:08.671291493 +0000 UTC m=+0.523585663 container remove 3521dc0d07cf10d97a4c9d4abad715c33461582f54fa0bd983622c22905d1b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 10:32:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:08.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:08 compute-0 systemd[1]: libpod-conmon-3521dc0d07cf10d97a4c9d4abad715c33461582f54fa0bd983622c22905d1b3c.scope: Deactivated successfully.
Dec 05 10:32:08 compute-0 sudo[290203]: pam_unix(sudo:session): session closed for user root
Dec 05 10:32:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:32:08 compute-0 sudo[290347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:32:08 compute-0 sudo[290347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:32:08 compute-0 sudo[290347]: pam_unix(sudo:session): session closed for user root
Dec 05 10:32:08 compute-0 ceph-mon[74418]: pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:32:08 compute-0 sudo[290372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:32:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:08.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:08 compute-0 sudo[290372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:32:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:09.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:09 compute-0 podman[290435]: 2025-12-05 10:32:09.412400657 +0000 UTC m=+0.048455048 container create 0843f9a3023adaa43acb6d248a6ad8cea7f876f20900e820a084d01138ab5ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:32:09 compute-0 systemd[1]: Started libpod-conmon-0843f9a3023adaa43acb6d248a6ad8cea7f876f20900e820a084d01138ab5ba7.scope.
Dec 05 10:32:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:32:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:32:09 compute-0 podman[290435]: 2025-12-05 10:32:09.393482793 +0000 UTC m=+0.029537204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:32:09 compute-0 podman[290435]: 2025-12-05 10:32:09.535855893 +0000 UTC m=+0.171910294 container init 0843f9a3023adaa43acb6d248a6ad8cea7f876f20900e820a084d01138ab5ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_haslett, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:32:09 compute-0 podman[290435]: 2025-12-05 10:32:09.543969623 +0000 UTC m=+0.180024004 container start 0843f9a3023adaa43acb6d248a6ad8cea7f876f20900e820a084d01138ab5ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_haslett, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:32:09 compute-0 podman[290435]: 2025-12-05 10:32:09.548448216 +0000 UTC m=+0.184502647 container attach 0843f9a3023adaa43acb6d248a6ad8cea7f876f20900e820a084d01138ab5ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_haslett, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 10:32:09 compute-0 laughing_haslett[290451]: 167 167
Dec 05 10:32:09 compute-0 systemd[1]: libpod-0843f9a3023adaa43acb6d248a6ad8cea7f876f20900e820a084d01138ab5ba7.scope: Deactivated successfully.
Dec 05 10:32:09 compute-0 podman[290435]: 2025-12-05 10:32:09.550936372 +0000 UTC m=+0.186990753 container died 0843f9a3023adaa43acb6d248a6ad8cea7f876f20900e820a084d01138ab5ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_haslett, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 10:32:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0f9ddbcb3a1a94844db8ce6b8118b876f32c84834d1faeea294f41cdfa26971-merged.mount: Deactivated successfully.
Dec 05 10:32:09 compute-0 podman[290435]: 2025-12-05 10:32:09.602221837 +0000 UTC m=+0.238276218 container remove 0843f9a3023adaa43acb6d248a6ad8cea7f876f20900e820a084d01138ab5ba7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_haslett, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:32:09 compute-0 systemd[1]: libpod-conmon-0843f9a3023adaa43acb6d248a6ad8cea7f876f20900e820a084d01138ab5ba7.scope: Deactivated successfully.
Dec 05 10:32:09 compute-0 podman[290475]: 2025-12-05 10:32:09.755837643 +0000 UTC m=+0.028865047 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:32:10 compute-0 podman[290475]: 2025-12-05 10:32:10.373416079 +0000 UTC m=+0.646443463 container create afa95020c77945f403ba46dbbe6b667bec00ec3c6280412d246120f470413983 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 05 10:32:10 compute-0 systemd[1]: Started libpod-conmon-afa95020c77945f403ba46dbbe6b667bec00ec3c6280412d246120f470413983.scope.
Dec 05 10:32:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:32:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efcb74578313a0956c5fd90a3ca5f328bb0faf8bc7ea8b0f58f8403d1834bbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efcb74578313a0956c5fd90a3ca5f328bb0faf8bc7ea8b0f58f8403d1834bbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efcb74578313a0956c5fd90a3ca5f328bb0faf8bc7ea8b0f58f8403d1834bbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:10 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 10:32:10 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 10:32:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efcb74578313a0956c5fd90a3ca5f328bb0faf8bc7ea8b0f58f8403d1834bbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:32:10 compute-0 podman[290475]: 2025-12-05 10:32:10.519818968 +0000 UTC m=+0.792846372 container init afa95020c77945f403ba46dbbe6b667bec00ec3c6280412d246120f470413983 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_booth, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Dec 05 10:32:10 compute-0 podman[290475]: 2025-12-05 10:32:10.528822992 +0000 UTC m=+0.801850376 container start afa95020c77945f403ba46dbbe6b667bec00ec3c6280412d246120f470413983 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_booth, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:32:10 compute-0 podman[290475]: 2025-12-05 10:32:10.536899882 +0000 UTC m=+0.809927266 container attach afa95020c77945f403ba46dbbe6b667bec00ec3c6280412d246120f470413983 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:32:10 compute-0 ceph-mon[74418]: pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:32:10 compute-0 podman[290492]: 2025-12-05 10:32:10.568848571 +0000 UTC m=+0.082000721 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:32:10 compute-0 podman[290496]: 2025-12-05 10:32:10.572534461 +0000 UTC m=+0.087248443 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3)
Dec 05 10:32:10 compute-0 podman[290495]: 2025-12-05 10:32:10.62916217 +0000 UTC m=+0.143702347 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:32:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:10.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:11 compute-0 lvm[290630]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:32:11 compute-0 lvm[290630]: VG ceph_vg0 finished
Dec 05 10:32:11 compute-0 reverent_booth[290491]: {}
Dec 05 10:32:11 compute-0 systemd[1]: libpod-afa95020c77945f403ba46dbbe6b667bec00ec3c6280412d246120f470413983.scope: Deactivated successfully.
Dec 05 10:32:11 compute-0 systemd[1]: libpod-afa95020c77945f403ba46dbbe6b667bec00ec3c6280412d246120f470413983.scope: Consumed 1.168s CPU time.
Dec 05 10:32:11 compute-0 podman[290475]: 2025-12-05 10:32:11.25093035 +0000 UTC m=+1.523957734 container died afa95020c77945f403ba46dbbe6b667bec00ec3c6280412d246120f470413983 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_booth, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 10:32:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8efcb74578313a0956c5fd90a3ca5f328bb0faf8bc7ea8b0f58f8403d1834bbf-merged.mount: Deactivated successfully.
Dec 05 10:32:11 compute-0 podman[290475]: 2025-12-05 10:32:11.302584624 +0000 UTC m=+1.575612008 container remove afa95020c77945f403ba46dbbe6b667bec00ec3c6280412d246120f470413983 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_booth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 05 10:32:11 compute-0 systemd[1]: libpod-conmon-afa95020c77945f403ba46dbbe6b667bec00ec3c6280412d246120f470413983.scope: Deactivated successfully.
Dec 05 10:32:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:11.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:11 compute-0 sudo[290372]: pam_unix(sudo:session): session closed for user root
Dec 05 10:32:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:32:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:32:11 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:32:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:32:11 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:32:11 compute-0 sudo[290645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:32:11 compute-0 sudo[290645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:32:11 compute-0 sudo[290645]: pam_unix(sudo:session): session closed for user root
Dec 05 10:32:12 compute-0 ceph-mon[74418]: pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:32:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:32:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:32:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:32:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:32:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:12.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:32:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:32:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:32:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:32:13 compute-0 nova_compute[257087]: 2025-12-05 10:32:13.307 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:13 compute-0 nova_compute[257087]: 2025-12-05 10:32:13.308 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:13 compute-0 nova_compute[257087]: 2025-12-05 10:32:13.308 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:32:13 compute-0 nova_compute[257087]: 2025-12-05 10:32:13.308 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:13 compute-0 nova_compute[257087]: 2025-12-05 10:32:13.309 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:13 compute-0 nova_compute[257087]: 2025-12-05 10:32:13.309 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:13.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:32:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:32:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:32:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:13.820Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:14 compute-0 nova_compute[257087]: 2025-12-05 10:32:14.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:32:14 compute-0 nova_compute[257087]: 2025-12-05 10:32:14.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:32:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:14.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:14 compute-0 ceph-mon[74418]: pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:32:14 compute-0 nova_compute[257087]: 2025-12-05 10:32:14.747 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:32:14 compute-0 nova_compute[257087]: 2025-12-05 10:32:14.748 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:32:14 compute-0 nova_compute[257087]: 2025-12-05 10:32:14.748 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:32:14 compute-0 nova_compute[257087]: 2025-12-05 10:32:14.748 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:32:14 compute-0 nova_compute[257087]: 2025-12-05 10:32:14.749 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:32:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:32:15 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4029789541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:32:15 compute-0 nova_compute[257087]: 2025-12-05 10:32:15.231 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:32:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:15.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:15 compute-0 nova_compute[257087]: 2025-12-05 10:32:15.397 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:32:15 compute-0 nova_compute[257087]: 2025-12-05 10:32:15.398 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4434MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:32:15 compute-0 nova_compute[257087]: 2025-12-05 10:32:15.398 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:32:15 compute-0 nova_compute[257087]: 2025-12-05 10:32:15.399 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:32:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:32:15 compute-0 nova_compute[257087]: 2025-12-05 10:32:15.495 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:32:15 compute-0 nova_compute[257087]: 2025-12-05 10:32:15.495 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:32:15 compute-0 nova_compute[257087]: 2025-12-05 10:32:15.516 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:32:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:32:15] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:32:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:32:15] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:32:15 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4029789541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:32:15 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1226716502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:32:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:32:15 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1576386131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:32:15 compute-0 nova_compute[257087]: 2025-12-05 10:32:15.970 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:32:15 compute-0 nova_compute[257087]: 2025-12-05 10:32:15.977 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:32:15 compute-0 nova_compute[257087]: 2025-12-05 10:32:15.998 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:32:16 compute-0 nova_compute[257087]: 2025-12-05 10:32:16.001 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:32:16 compute-0 nova_compute[257087]: 2025-12-05 10:32:16.001 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:32:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:16.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:16 compute-0 ceph-mon[74418]: pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:32:16 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1576386131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:32:16 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1064796542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:32:16 compute-0 nova_compute[257087]: 2025-12-05 10:32:16.997 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:32:16 compute-0 nova_compute[257087]: 2025-12-05 10:32:16.998 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:32:16 compute-0 nova_compute[257087]: 2025-12-05 10:32:16.998 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:32:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:17.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:17 compute-0 nova_compute[257087]: 2025-12-05 10:32:17.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:32:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:17.542Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:17 compute-0 sudo[290720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:32:17 compute-0 sudo[290720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:32:17 compute-0 sudo[290720]: pam_unix(sudo:session): session closed for user root
Dec 05 10:32:17 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3109848995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:32:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:32:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:32:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:32:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:32:18 compute-0 nova_compute[257087]: 2025-12-05 10:32:18.310 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:18 compute-0 nova_compute[257087]: 2025-12-05 10:32:18.312 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:18 compute-0 nova_compute[257087]: 2025-12-05 10:32:18.313 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:32:18 compute-0 nova_compute[257087]: 2025-12-05 10:32:18.313 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:18 compute-0 nova_compute[257087]: 2025-12-05 10:32:18.372 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:18 compute-0 nova_compute[257087]: 2025-12-05 10:32:18.373 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:18 compute-0 nova_compute[257087]: 2025-12-05 10:32:18.526 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:32:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:18.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:32:18 compute-0 ceph-mon[74418]: pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3430560744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:32:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:18.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:19.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:19 compute-0 nova_compute[257087]: 2025-12-05 10:32:19.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:32:19 compute-0 nova_compute[257087]: 2025-12-05 10:32:19.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:32:20 compute-0 nova_compute[257087]: 2025-12-05 10:32:20.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:32:20 compute-0 nova_compute[257087]: 2025-12-05 10:32:20.530 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:32:20 compute-0 nova_compute[257087]: 2025-12-05 10:32:20.530 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:32:20 compute-0 nova_compute[257087]: 2025-12-05 10:32:20.543 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:32:20 compute-0 nova_compute[257087]: 2025-12-05 10:32:20.543 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:32:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:32:20.593 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:32:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:32:20.594 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:32:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:32:20.594 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:32:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:20 compute-0 ceph-mon[74418]: pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:21.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:22.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:22 compute-0 ceph-mon[74418]: pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:32:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:32:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:32:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:32:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:23.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:23 compute-0 nova_compute[257087]: 2025-12-05 10:32:23.374 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:23 compute-0 nova_compute[257087]: 2025-12-05 10:32:23.376 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:23 compute-0 nova_compute[257087]: 2025-12-05 10:32:23.377 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:32:23 compute-0 nova_compute[257087]: 2025-12-05 10:32:23.377 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:23 compute-0 nova_compute[257087]: 2025-12-05 10:32:23.415 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:23 compute-0 nova_compute[257087]: 2025-12-05 10:32:23.416 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:32:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:23.821Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:24.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:24 compute-0 ceph-mon[74418]: pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:25.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:32:25] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:32:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:32:25] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:32:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:26.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:27 compute-0 ceph-mon[74418]: pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:27.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:27.544Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:32:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:32:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:32:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:32:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:32:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:32:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:32:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:32:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:32:27
Dec 05 10:32:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:32:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:32:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.log', '.mgr', '.nfs', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'backups', 'images']
Dec 05 10:32:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:32:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:32:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:32:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:32:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:32:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:32:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:32:28 compute-0 nova_compute[257087]: 2025-12-05 10:32:28.415 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:28.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:32:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:28.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:29 compute-0 ceph-mon[74418]: pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:29.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:30 compute-0 ceph-mon[74418]: pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:32:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:30.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:32:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:31.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:32 compute-0 ceph-mon[74418]: pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:32.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:32:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:32:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:32:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:32:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:33.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:33 compute-0 nova_compute[257087]: 2025-12-05 10:32:33.417 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:33 compute-0 nova_compute[257087]: 2025-12-05 10:32:33.419 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:33 compute-0 nova_compute[257087]: 2025-12-05 10:32:33.419 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:32:33 compute-0 nova_compute[257087]: 2025-12-05 10:32:33.419 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:33 compute-0 nova_compute[257087]: 2025-12-05 10:32:33.459 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:33 compute-0 nova_compute[257087]: 2025-12-05 10:32:33.460 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:32:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:33.822Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:32:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:33.822Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:32:34 compute-0 ceph-mon[74418]: pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:35.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:32:35] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec 05 10:32:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:32:35] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec 05 10:32:36 compute-0 ceph-mon[74418]: pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:36.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:37.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:37.545Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:37 compute-0 sudo[290765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:32:37 compute-0 sudo[290765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:32:37 compute-0 sudo[290765]: pam_unix(sudo:session): session closed for user root
Dec 05 10:32:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:32:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:32:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:32:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:32:38 compute-0 nova_compute[257087]: 2025-12-05 10:32:38.461 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:38 compute-0 nova_compute[257087]: 2025-12-05 10:32:38.463 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:38 compute-0 nova_compute[257087]: 2025-12-05 10:32:38.464 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:32:38 compute-0 nova_compute[257087]: 2025-12-05 10:32:38.464 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:38 compute-0 nova_compute[257087]: 2025-12-05 10:32:38.495 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:38 compute-0 nova_compute[257087]: 2025-12-05 10:32:38.496 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:38 compute-0 ceph-mon[74418]: pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:38.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:32:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:38.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:39.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:40 compute-0 ceph-mon[74418]: pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:32:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:40.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:32:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:41.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:32:41 compute-0 podman[290796]: 2025-12-05 10:32:41.419660923 +0000 UTC m=+0.067125094 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:32:41 compute-0 podman[290794]: 2025-12-05 10:32:41.442186786 +0000 UTC m=+0.091163119 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 10:32:41 compute-0 podman[290795]: 2025-12-05 10:32:41.461849701 +0000 UTC m=+0.111185604 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:32:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:32:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:32:42 compute-0 ceph-mon[74418]: pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:42.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:32:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:32:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:32:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:32:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:43.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:43 compute-0 nova_compute[257087]: 2025-12-05 10:32:43.495 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:43 compute-0 nova_compute[257087]: 2025-12-05 10:32:43.498 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:32:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:32:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:43.823Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:44 compute-0 ceph-mon[74418]: pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:44.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:45.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:32:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:32:45] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:32:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:32:45] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:32:46 compute-0 ceph-mon[74418]: pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:32:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:46.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:47.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:32:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:47.546Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:32:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:32:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:32:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:32:48 compute-0 nova_compute[257087]: 2025-12-05 10:32:48.498 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:48 compute-0 nova_compute[257087]: 2025-12-05 10:32:48.500 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:48.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:32:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:48.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:49 compute-0 ceph-mon[74418]: pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:32:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:49.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:32:49 compute-0 nova_compute[257087]: 2025-12-05 10:32:49.517 257094 DEBUG oslo_concurrency.processutils [None req-7a8b88bf-095a-4e70-980a-5e588d2d3fe4 de7293a0a9ae42589eb1abfd225592bb 096a8b53d5eb4713bd6967b82ab963be - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:32:49 compute-0 nova_compute[257087]: 2025-12-05 10:32:49.563 257094 DEBUG oslo_concurrency.processutils [None req-7a8b88bf-095a-4e70-980a-5e588d2d3fe4 de7293a0a9ae42589eb1abfd225592bb 096a8b53d5eb4713bd6967b82ab963be - - default default] CMD "env LANG=C uptime" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:32:50 compute-0 ceph-mon[74418]: pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:32:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:50.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:51.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:32:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:52.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:32:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:32:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:32:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:32:53 compute-0 ceph-mon[74418]: pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:32:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:53.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:53 compute-0 nova_compute[257087]: 2025-12-05 10:32:53.499 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:32:53 compute-0 nova_compute[257087]: 2025-12-05 10:32:53.500 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:53 compute-0 nova_compute[257087]: 2025-12-05 10:32:53.500 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:32:53 compute-0 nova_compute[257087]: 2025-12-05 10:32:53.501 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:53 compute-0 nova_compute[257087]: 2025-12-05 10:32:53.501 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:32:53 compute-0 nova_compute[257087]: 2025-12-05 10:32:53.502 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:32:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:32:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:53.825Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:32:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:53.825Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:54.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:55 compute-0 ceph-mon[74418]: pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:32:55 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:32:55.386 165250 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:45:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b2:22:9b:a6:37:19'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:32:55 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:32:55.387 165250 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:32:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:55 compute-0 nova_compute[257087]: 2025-12-05 10:32:55.445 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:55.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:32:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:32:55] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:32:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:32:55] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:32:56 compute-0 ceph-mon[74418]: pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:32:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:56.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/758297264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:32:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/758297264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:32:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:32:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:57.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:32:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:57.547Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:32:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:32:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:32:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:32:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:32:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:32:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:32:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:32:57 compute-0 sudo[290872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:32:57 compute-0 sudo[290872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:32:57 compute-0 sudo[290872]: pam_unix(sudo:session): session closed for user root
Dec 05 10:32:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:32:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:32:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:32:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:32:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:32:58 compute-0 ceph-mon[74418]: pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:32:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:32:58 compute-0 nova_compute[257087]: 2025-12-05 10:32:58.503 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:32:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:32:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:32:58.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:32:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:32:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:32:58.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:32:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:32:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:32:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:32:59.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:32:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:00 compute-0 ceph-mon[74418]: pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:00.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 05 10:33:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:01.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 05 10:33:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:02.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:33:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:33:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:33:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:33:03 compute-0 ceph-mon[74418]: pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:03.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:03 compute-0 nova_compute[257087]: 2025-12-05 10:33:03.504 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:03 compute-0 nova_compute[257087]: 2025-12-05 10:33:03.505 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:03 compute-0 nova_compute[257087]: 2025-12-05 10:33:03.506 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:33:03 compute-0 nova_compute[257087]: 2025-12-05 10:33:03.506 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:03 compute-0 nova_compute[257087]: 2025-12-05 10:33:03.506 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:03 compute-0 nova_compute[257087]: 2025-12-05 10:33:03.507 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:03 compute-0 nova_compute[257087]: 2025-12-05 10:33:03.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:33:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:33:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:03.827Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:33:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:03.827Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:33:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:04.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:05 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:33:05.390 165250 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41643524-e4b6-4069-ba08-6e5872c74bd3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:33:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:05.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:33:05] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:33:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:33:05] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:33:05 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec 05 10:33:05 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:05.750796) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:33:05 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec 05 10:33:05 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930785750975, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2115, "num_deletes": 251, "total_data_size": 4350160, "memory_usage": 4414160, "flush_reason": "Manual Compaction"}
Dec 05 10:33:05 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec 05 10:33:05 compute-0 ceph-mon[74418]: pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930786008124, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 4228562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35052, "largest_seqno": 37166, "table_properties": {"data_size": 4218807, "index_size": 6249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19598, "raw_average_key_size": 20, "raw_value_size": 4199478, "raw_average_value_size": 4360, "num_data_blocks": 265, "num_entries": 963, "num_filter_entries": 963, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930558, "oldest_key_time": 1764930558, "file_creation_time": 1764930785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 257547 microseconds, and 12195 cpu microseconds.
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.008359) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 4228562 bytes OK
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.008425) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.039452) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.039527) EVENT_LOG_v1 {"time_micros": 1764930786039512, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.039569) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 4341524, prev total WAL file size 4341524, number of live WAL files 2.
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.042031) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(4129KB)], [77(11MB)]
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930786042331, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 15871657, "oldest_snapshot_seqno": -1}
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6914 keys, 13615358 bytes, temperature: kUnknown
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930786363796, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 13615358, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13570825, "index_size": 26117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 181921, "raw_average_key_size": 26, "raw_value_size": 13447852, "raw_average_value_size": 1945, "num_data_blocks": 1021, "num_entries": 6914, "num_filter_entries": 6914, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764930786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.364336) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13615358 bytes
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.560190) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 49.3 rd, 42.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 11.1 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 7430, records dropped: 516 output_compression: NoCompression
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.560305) EVENT_LOG_v1 {"time_micros": 1764930786560228, "job": 44, "event": "compaction_finished", "compaction_time_micros": 321655, "compaction_time_cpu_micros": 67729, "output_level": 6, "num_output_files": 1, "total_output_size": 13615358, "num_input_records": 7430, "num_output_records": 6914, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930786562061, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930786566385, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.041735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.566490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.566496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.566498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.566500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:33:06 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:33:06.566502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:33:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:06.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:07 compute-0 ceph-mon[74418]: pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:07.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:07.548Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:33:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:33:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:33:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:33:08 compute-0 nova_compute[257087]: 2025-12-05 10:33:08.508 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:08 compute-0 nova_compute[257087]: 2025-12-05 10:33:08.511 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:08 compute-0 nova_compute[257087]: 2025-12-05 10:33:08.511 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:33:08 compute-0 nova_compute[257087]: 2025-12-05 10:33:08.511 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:08 compute-0 nova_compute[257087]: 2025-12-05 10:33:08.553 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:08 compute-0 nova_compute[257087]: 2025-12-05 10:33:08.555 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:33:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:08.925Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:33:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:08.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:33:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:09.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:10 compute-0 ceph-mon[74418]: pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:10.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:11.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:11 compute-0 ceph-mon[74418]: pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:11 compute-0 nova_compute[257087]: 2025-12-05 10:33:11.648 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:33:11 compute-0 nova_compute[257087]: 2025-12-05 10:33:11.648 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 10:33:11 compute-0 sudo[290911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:33:11 compute-0 sudo[290911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:33:11 compute-0 sudo[290911]: pam_unix(sudo:session): session closed for user root
Dec 05 10:33:11 compute-0 sudo[290954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:33:11 compute-0 sudo[290954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:33:11 compute-0 podman[290937]: 2025-12-05 10:33:11.986329603 +0000 UTC m=+0.077442486 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:33:11 compute-0 podman[290935]: 2025-12-05 10:33:11.992427928 +0000 UTC m=+0.092837834 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:33:12 compute-0 podman[290936]: 2025-12-05 10:33:12.053976461 +0000 UTC m=+0.154561461 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 10:33:12 compute-0 sudo[290954]: pam_unix(sudo:session): session closed for user root
Dec 05 10:33:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 05 10:33:12 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 10:33:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:33:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:33:12 compute-0 ceph-mon[74418]: pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:33:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:33:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:33:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:33:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:13.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:13 compute-0 nova_compute[257087]: 2025-12-05 10:33:13.556 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:13.828Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:13 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:33:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 10:33:14 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:33:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 10:33:14 compute-0 nova_compute[257087]: 2025-12-05 10:33:14.549 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:33:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:14.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:15 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 10:33:15 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:15 compute-0 ceph-mon[74418]: pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:15 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:15 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:33:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:15.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:33:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:33:15] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:33:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:33:15] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:33:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 05 10:33:15 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 10:33:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 10:33:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 10:33:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:16 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 05 10:33:16 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:16 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:16 compute-0 nova_compute[257087]: 2025-12-05 10:33:16.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:33:16 compute-0 nova_compute[257087]: 2025-12-05 10:33:16.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:33:16 compute-0 nova_compute[257087]: 2025-12-05 10:33:16.552 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:33:16 compute-0 nova_compute[257087]: 2025-12-05 10:33:16.553 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:33:16 compute-0 nova_compute[257087]: 2025-12-05 10:33:16.553 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:33:16 compute-0 nova_compute[257087]: 2025-12-05 10:33:16.553 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:33:16 compute-0 nova_compute[257087]: 2025-12-05 10:33:16.554 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:33:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 05 10:33:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 10:33:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:16.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:33:16 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:33:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:33:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:33:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:33:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:33:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:33:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:33:16 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:33:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:33:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:33:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:33:16 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:33:16 compute-0 sudo[291083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:33:16 compute-0 sudo[291083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:33:16 compute-0 sudo[291083]: pam_unix(sudo:session): session closed for user root
Dec 05 10:33:16 compute-0 sudo[291108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:33:16 compute-0 sudo[291108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:33:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:33:17 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279312253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.076 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.297 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.299 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4497MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.299 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.300 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:33:17 compute-0 ceph-mon[74418]: pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:17 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3514914497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:33:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 05 10:33:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:33:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:33:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:33:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:33:17 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:33:17 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4279312253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:33:17 compute-0 podman[291174]: 2025-12-05 10:33:17.433002398 +0000 UTC m=+0.048090548 container create 032c1f26c1baef67c963351aceec9c7bc9f379cf6c1d98bc3ae235f633885abe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 10:33:17 compute-0 systemd[1]: Started libpod-conmon-032c1f26c1baef67c963351aceec9c7bc9f379cf6c1d98bc3ae235f633885abe.scope.
Dec 05 10:33:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:17.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.489 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.490 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:33:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:33:17 compute-0 podman[291174]: 2025-12-05 10:33:17.412892881 +0000 UTC m=+0.027981061 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.508 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing inventories for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 10:33:17 compute-0 podman[291174]: 2025-12-05 10:33:17.523125937 +0000 UTC m=+0.138214117 container init 032c1f26c1baef67c963351aceec9c7bc9f379cf6c1d98bc3ae235f633885abe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:33:17 compute-0 podman[291174]: 2025-12-05 10:33:17.531936557 +0000 UTC m=+0.147024717 container start 032c1f26c1baef67c963351aceec9c7bc9f379cf6c1d98bc3ae235f633885abe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_mirzakhani, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 10:33:17 compute-0 podman[291174]: 2025-12-05 10:33:17.535824033 +0000 UTC m=+0.150912183 container attach 032c1f26c1baef67c963351aceec9c7bc9f379cf6c1d98bc3ae235f633885abe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_mirzakhani, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:33:17 compute-0 funny_mirzakhani[291190]: 167 167
Dec 05 10:33:17 compute-0 systemd[1]: libpod-032c1f26c1baef67c963351aceec9c7bc9f379cf6c1d98bc3ae235f633885abe.scope: Deactivated successfully.
Dec 05 10:33:17 compute-0 conmon[291190]: conmon 032c1f26c1baef67c963 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-032c1f26c1baef67c963351aceec9c7bc9f379cf6c1d98bc3ae235f633885abe.scope/container/memory.events
Dec 05 10:33:17 compute-0 podman[291174]: 2025-12-05 10:33:17.540495099 +0000 UTC m=+0.155583249 container died 032c1f26c1baef67c963351aceec9c7bc9f379cf6c1d98bc3ae235f633885abe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_mirzakhani, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:33:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:17.549Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4a6d3c0c63c4b57b3a8db19e36f7d0dadbbb62006b910d2db2d2013b138caa4-merged.mount: Deactivated successfully.
Dec 05 10:33:17 compute-0 podman[291174]: 2025-12-05 10:33:17.58317032 +0000 UTC m=+0.198258470 container remove 032c1f26c1baef67c963351aceec9c7bc9f379cf6c1d98bc3ae235f633885abe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:33:17 compute-0 systemd[1]: libpod-conmon-032c1f26c1baef67c963351aceec9c7bc9f379cf6c1d98bc3ae235f633885abe.scope: Deactivated successfully.
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.597 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Updating ProviderTree inventory for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.598 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Updating inventory in ProviderTree for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.622 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing aggregate associations for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.648 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing trait associations for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6, traits: HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_MMX,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 10:33:17 compute-0 nova_compute[257087]: 2025-12-05 10:33:17.668 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:33:17 compute-0 podman[291215]: 2025-12-05 10:33:17.769475703 +0000 UTC m=+0.048546040 container create 75a5c0536399096c2a72b897b31b0463bf11462970018f110b1bdd8454c82d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_archimedes, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:33:17 compute-0 systemd[1]: Started libpod-conmon-75a5c0536399096c2a72b897b31b0463bf11462970018f110b1bdd8454c82d6e.scope.
Dec 05 10:33:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:33:17 compute-0 sudo[291229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9790cae1c38969c3bf1369a07fb1c39de43fdceefb3a380b105de0bedb626966/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:17 compute-0 sudo[291229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9790cae1c38969c3bf1369a07fb1c39de43fdceefb3a380b105de0bedb626966/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9790cae1c38969c3bf1369a07fb1c39de43fdceefb3a380b105de0bedb626966/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9790cae1c38969c3bf1369a07fb1c39de43fdceefb3a380b105de0bedb626966/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9790cae1c38969c3bf1369a07fb1c39de43fdceefb3a380b105de0bedb626966/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:17 compute-0 podman[291215]: 2025-12-05 10:33:17.747705451 +0000 UTC m=+0.026775798 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:33:17 compute-0 sudo[291229]: pam_unix(sudo:session): session closed for user root
Dec 05 10:33:17 compute-0 podman[291215]: 2025-12-05 10:33:17.865781191 +0000 UTC m=+0.144851538 container init 75a5c0536399096c2a72b897b31b0463bf11462970018f110b1bdd8454c82d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_archimedes, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:33:17 compute-0 podman[291215]: 2025-12-05 10:33:17.873182332 +0000 UTC m=+0.152252659 container start 75a5c0536399096c2a72b897b31b0463bf11462970018f110b1bdd8454c82d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 10:33:17 compute-0 podman[291215]: 2025-12-05 10:33:17.876936194 +0000 UTC m=+0.156006521 container attach 75a5c0536399096c2a72b897b31b0463bf11462970018f110b1bdd8454c82d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_archimedes, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 10:33:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:33:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:33:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:33:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:33:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:33:18 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3460634190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:33:18 compute-0 nova_compute[257087]: 2025-12-05 10:33:18.186 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:33:18 compute-0 nova_compute[257087]: 2025-12-05 10:33:18.210 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:33:18 compute-0 heuristic_archimedes[291273]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:33:18 compute-0 heuristic_archimedes[291273]: --> All data devices are unavailable
Dec 05 10:33:18 compute-0 systemd[1]: libpod-75a5c0536399096c2a72b897b31b0463bf11462970018f110b1bdd8454c82d6e.scope: Deactivated successfully.
Dec 05 10:33:18 compute-0 podman[291215]: 2025-12-05 10:33:18.244918917 +0000 UTC m=+0.523989234 container died 75a5c0536399096c2a72b897b31b0463bf11462970018f110b1bdd8454c82d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_archimedes, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:33:18 compute-0 nova_compute[257087]: 2025-12-05 10:33:18.559 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:18 compute-0 nova_compute[257087]: 2025-12-05 10:33:18.718 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:33:18 compute-0 ceph-mon[74418]: pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:33:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/123933241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:33:18 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3460634190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:33:18 compute-0 nova_compute[257087]: 2025-12-05 10:33:18.721 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:33:18 compute-0 nova_compute[257087]: 2025-12-05 10:33:18.722 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:33:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-9790cae1c38969c3bf1369a07fb1c39de43fdceefb3a380b105de0bedb626966-merged.mount: Deactivated successfully.
Dec 05 10:33:18 compute-0 podman[291215]: 2025-12-05 10:33:18.760971473 +0000 UTC m=+1.040041820 container remove 75a5c0536399096c2a72b897b31b0463bf11462970018f110b1bdd8454c82d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_archimedes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 05 10:33:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:33:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:18.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:33:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:33:18 compute-0 sudo[291108]: pam_unix(sudo:session): session closed for user root
Dec 05 10:33:18 compute-0 systemd[1]: libpod-conmon-75a5c0536399096c2a72b897b31b0463bf11462970018f110b1bdd8454c82d6e.scope: Deactivated successfully.
Dec 05 10:33:18 compute-0 sudo[291306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:33:18 compute-0 sudo[291306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:33:18 compute-0 sudo[291306]: pam_unix(sudo:session): session closed for user root
Dec 05 10:33:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:33:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:18.927Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:18 compute-0 sudo[291331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:33:18 compute-0 sudo[291331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:33:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:19.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:19 compute-0 podman[291395]: 2025-12-05 10:33:19.395105489 +0000 UTC m=+0.027593280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:33:19 compute-0 nova_compute[257087]: 2025-12-05 10:33:19.720 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:33:19 compute-0 nova_compute[257087]: 2025-12-05 10:33:19.722 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:33:19 compute-0 nova_compute[257087]: 2025-12-05 10:33:19.722 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:33:20 compute-0 podman[291395]: 2025-12-05 10:33:20.216895437 +0000 UTC m=+0.849383208 container create 59cf95fc52aa0ce54298026a9a25dc00a95cf0728400573533c61718aa323ed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_grothendieck, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 05 10:33:20 compute-0 nova_compute[257087]: 2025-12-05 10:33:20.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:33:20 compute-0 nova_compute[257087]: 2025-12-05 10:33:20.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:33:20 compute-0 nova_compute[257087]: 2025-12-05 10:33:20.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:33:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:33:20.594 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:33:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:33:20.595 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:33:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:33:20.595 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:33:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:20.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:33:20 compute-0 systemd[1]: Started libpod-conmon-59cf95fc52aa0ce54298026a9a25dc00a95cf0728400573533c61718aa323ed8.scope.
Dec 05 10:33:20 compute-0 ceph-mon[74418]: pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:33:20 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3797829664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:33:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:33:20 compute-0 nova_compute[257087]: 2025-12-05 10:33:20.960 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:33:20 compute-0 nova_compute[257087]: 2025-12-05 10:33:20.961 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:33:20 compute-0 nova_compute[257087]: 2025-12-05 10:33:20.962 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 10:33:20 compute-0 podman[291395]: 2025-12-05 10:33:20.968707681 +0000 UTC m=+1.601195482 container init 59cf95fc52aa0ce54298026a9a25dc00a95cf0728400573533c61718aa323ed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 05 10:33:20 compute-0 podman[291395]: 2025-12-05 10:33:20.976928375 +0000 UTC m=+1.609416146 container start 59cf95fc52aa0ce54298026a9a25dc00a95cf0728400573533c61718aa323ed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_grothendieck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:33:20 compute-0 podman[291395]: 2025-12-05 10:33:20.98081573 +0000 UTC m=+1.613303501 container attach 59cf95fc52aa0ce54298026a9a25dc00a95cf0728400573533c61718aa323ed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_grothendieck, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:33:20 compute-0 affectionate_grothendieck[291413]: 167 167
Dec 05 10:33:20 compute-0 systemd[1]: libpod-59cf95fc52aa0ce54298026a9a25dc00a95cf0728400573533c61718aa323ed8.scope: Deactivated successfully.
Dec 05 10:33:20 compute-0 nova_compute[257087]: 2025-12-05 10:33:20.985 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 10:33:21 compute-0 podman[291418]: 2025-12-05 10:33:21.132172434 +0000 UTC m=+0.128064131 container died 59cf95fc52aa0ce54298026a9a25dc00a95cf0728400573533c61718aa323ed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_grothendieck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:33:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-aba1dd5137f559511210a8ef2ced57326df70cf1d8bf85a233fed5fcc6d89f7f-merged.mount: Deactivated successfully.
Dec 05 10:33:21 compute-0 podman[291418]: 2025-12-05 10:33:21.173531969 +0000 UTC m=+0.169423646 container remove 59cf95fc52aa0ce54298026a9a25dc00a95cf0728400573533c61718aa323ed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:33:21 compute-0 systemd[1]: libpod-conmon-59cf95fc52aa0ce54298026a9a25dc00a95cf0728400573533c61718aa323ed8.scope: Deactivated successfully.
Dec 05 10:33:21 compute-0 podman[291440]: 2025-12-05 10:33:21.380108804 +0000 UTC m=+0.051542952 container create 5adce9947aed830cebea2cac243986679cca2d13172b016cd49a1b531b18f4b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:33:21 compute-0 systemd[1]: Started libpod-conmon-5adce9947aed830cebea2cac243986679cca2d13172b016cd49a1b531b18f4b8.scope.
Dec 05 10:33:21 compute-0 podman[291440]: 2025-12-05 10:33:21.357923671 +0000 UTC m=+0.029357859 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:33:21 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98460cb5b44bbbc493af8cae2dc8f80cdd003f43bd1c34271bf93c4bad2e5771/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98460cb5b44bbbc493af8cae2dc8f80cdd003f43bd1c34271bf93c4bad2e5771/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98460cb5b44bbbc493af8cae2dc8f80cdd003f43bd1c34271bf93c4bad2e5771/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98460cb5b44bbbc493af8cae2dc8f80cdd003f43bd1c34271bf93c4bad2e5771/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:21 compute-0 podman[291440]: 2025-12-05 10:33:21.475035974 +0000 UTC m=+0.146470182 container init 5adce9947aed830cebea2cac243986679cca2d13172b016cd49a1b531b18f4b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_murdock, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 05 10:33:21 compute-0 podman[291440]: 2025-12-05 10:33:21.484302276 +0000 UTC m=+0.155736444 container start 5adce9947aed830cebea2cac243986679cca2d13172b016cd49a1b531b18f4b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_murdock, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 05 10:33:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:21.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:21 compute-0 podman[291440]: 2025-12-05 10:33:21.488655845 +0000 UTC m=+0.160090023 container attach 5adce9947aed830cebea2cac243986679cca2d13172b016cd49a1b531b18f4b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_murdock, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:33:21 compute-0 nova_compute[257087]: 2025-12-05 10:33:21.553 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:33:21 compute-0 nova_compute[257087]: 2025-12-05 10:33:21.554 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:33:21 compute-0 nova_compute[257087]: 2025-12-05 10:33:21.554 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]: {
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:     "1": [
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:         {
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             "devices": [
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "/dev/loop3"
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             ],
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             "lv_name": "ceph_lv0",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             "lv_size": "21470642176",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             "name": "ceph_lv0",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             "tags": {
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.cluster_name": "ceph",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.crush_device_class": "",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.encrypted": "0",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.osd_id": "1",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.type": "block",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.vdo": "0",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:                 "ceph.with_tpm": "0"
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             },
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             "type": "block",
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:             "vg_name": "ceph_vg0"
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:         }
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]:     ]
Dec 05 10:33:21 compute-0 hardcore_murdock[291457]: }
Dec 05 10:33:21 compute-0 systemd[1]: libpod-5adce9947aed830cebea2cac243986679cca2d13172b016cd49a1b531b18f4b8.scope: Deactivated successfully.
Dec 05 10:33:21 compute-0 conmon[291457]: conmon 5adce9947aed830cebea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5adce9947aed830cebea2cac243986679cca2d13172b016cd49a1b531b18f4b8.scope/container/memory.events
Dec 05 10:33:21 compute-0 podman[291466]: 2025-12-05 10:33:21.889642144 +0000 UTC m=+0.035087956 container died 5adce9947aed830cebea2cac243986679cca2d13172b016cd49a1b531b18f4b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 05 10:33:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-98460cb5b44bbbc493af8cae2dc8f80cdd003f43bd1c34271bf93c4bad2e5771-merged.mount: Deactivated successfully.
Dec 05 10:33:21 compute-0 podman[291466]: 2025-12-05 10:33:21.935608413 +0000 UTC m=+0.081054205 container remove 5adce9947aed830cebea2cac243986679cca2d13172b016cd49a1b531b18f4b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_murdock, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:33:21 compute-0 ceph-mon[74418]: pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:33:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1602116948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:33:21 compute-0 systemd[1]: libpod-conmon-5adce9947aed830cebea2cac243986679cca2d13172b016cd49a1b531b18f4b8.scope: Deactivated successfully.
Dec 05 10:33:22 compute-0 sudo[291331]: pam_unix(sudo:session): session closed for user root
Dec 05 10:33:22 compute-0 sudo[291481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:33:22 compute-0 sudo[291481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:33:22 compute-0 sudo[291481]: pam_unix(sudo:session): session closed for user root
Dec 05 10:33:22 compute-0 sudo[291506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:33:22 compute-0 sudo[291506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:33:22 compute-0 podman[291573]: 2025-12-05 10:33:22.58009245 +0000 UTC m=+0.055809998 container create c46c8c973440e7c5be685cfcd0be8871750237c5e345d5b389cb47ff40d91c31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:33:22 compute-0 systemd[1]: Started libpod-conmon-c46c8c973440e7c5be685cfcd0be8871750237c5e345d5b389cb47ff40d91c31.scope.
Dec 05 10:33:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:33:22 compute-0 podman[291573]: 2025-12-05 10:33:22.560178019 +0000 UTC m=+0.035895587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:33:22 compute-0 podman[291573]: 2025-12-05 10:33:22.669794508 +0000 UTC m=+0.145512076 container init c46c8c973440e7c5be685cfcd0be8871750237c5e345d5b389cb47ff40d91c31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_fermi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:33:22 compute-0 podman[291573]: 2025-12-05 10:33:22.676887752 +0000 UTC m=+0.152605290 container start c46c8c973440e7c5be685cfcd0be8871750237c5e345d5b389cb47ff40d91c31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 05 10:33:22 compute-0 podman[291573]: 2025-12-05 10:33:22.680273363 +0000 UTC m=+0.155990941 container attach c46c8c973440e7c5be685cfcd0be8871750237c5e345d5b389cb47ff40d91c31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_fermi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:33:22 compute-0 clever_fermi[291590]: 167 167
Dec 05 10:33:22 compute-0 podman[291573]: 2025-12-05 10:33:22.685833005 +0000 UTC m=+0.161550553 container died c46c8c973440e7c5be685cfcd0be8871750237c5e345d5b389cb47ff40d91c31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:33:22 compute-0 systemd[1]: libpod-c46c8c973440e7c5be685cfcd0be8871750237c5e345d5b389cb47ff40d91c31.scope: Deactivated successfully.
Dec 05 10:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c4ed351fe3cbf4fb9bca9dd6ebe2817f052daf2dae17f48a17a74e4ac273a27-merged.mount: Deactivated successfully.
Dec 05 10:33:22 compute-0 podman[291573]: 2025-12-05 10:33:22.722484111 +0000 UTC m=+0.198201659 container remove c46c8c973440e7c5be685cfcd0be8871750237c5e345d5b389cb47ff40d91c31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:33:22 compute-0 systemd[1]: libpod-conmon-c46c8c973440e7c5be685cfcd0be8871750237c5e345d5b389cb47ff40d91c31.scope: Deactivated successfully.
Dec 05 10:33:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:22.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:33:22 compute-0 podman[291614]: 2025-12-05 10:33:22.901467886 +0000 UTC m=+0.031030785 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:33:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:33:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:33:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:33:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:33:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:33:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:23.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:33:23 compute-0 nova_compute[257087]: 2025-12-05 10:33:23.562 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:23 compute-0 nova_compute[257087]: 2025-12-05 10:33:23.565 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:23 compute-0 nova_compute[257087]: 2025-12-05 10:33:23.565 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:33:23 compute-0 nova_compute[257087]: 2025-12-05 10:33:23.565 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:23 compute-0 nova_compute[257087]: 2025-12-05 10:33:23.566 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:23 compute-0 nova_compute[257087]: 2025-12-05 10:33:23.567 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:23 compute-0 podman[291614]: 2025-12-05 10:33:23.573616276 +0000 UTC m=+0.703179135 container create b0bdd0df152721bb0057b39d711eebd7a7bd094dd5c94ea8191606033179d867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:33:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:23.829Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:33:24 compute-0 systemd[1]: Started libpod-conmon-b0bdd0df152721bb0057b39d711eebd7a7bd094dd5c94ea8191606033179d867.scope.
Dec 05 10:33:24 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9089bade6e88436f906aa58d423b1564d596b2db51b78dbe755840478afdb71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9089bade6e88436f906aa58d423b1564d596b2db51b78dbe755840478afdb71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9089bade6e88436f906aa58d423b1564d596b2db51b78dbe755840478afdb71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9089bade6e88436f906aa58d423b1564d596b2db51b78dbe755840478afdb71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:33:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:24.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:33:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:25.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:33:25] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:33:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:33:25] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec 05 10:33:25 compute-0 podman[291614]: 2025-12-05 10:33:25.849937118 +0000 UTC m=+2.979500057 container init b0bdd0df152721bb0057b39d711eebd7a7bd094dd5c94ea8191606033179d867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:33:25 compute-0 podman[291614]: 2025-12-05 10:33:25.858863581 +0000 UTC m=+2.988426430 container start b0bdd0df152721bb0057b39d711eebd7a7bd094dd5c94ea8191606033179d867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 10:33:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:33:26 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.7 total, 600.0 interval
                                           Cumulative writes: 8135 writes, 37K keys, 8133 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 8134 writes, 8132 syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1423 writes, 6879 keys, 1423 commit groups, 1.0 writes per commit group, ingest: 11.36 MB, 0.02 MB/s
                                           Interval WAL: 1422 writes, 1422 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     31.8      1.79              0.30        22    0.081       0      0       0.0       0.0
                                             L6      1/0   12.98 MB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   4.7     72.9     62.6      4.28              1.08        21    0.204    124K    11K       0.0       0.0
                                            Sum      1/0   12.98 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.7     51.4     53.5      6.07              1.38        43    0.141    124K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.4     54.7     55.2      1.41              0.33        10    0.141     35K   3084       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   0.0     72.9     62.6      4.28              1.08        21    0.204    124K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     42.8      1.33              0.30        21    0.063       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.46              0.00         1    0.463       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.7 total, 600.0 interval
                                           Flush(GB): cumulative 0.056, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.32 GB write, 0.11 MB/s write, 0.30 GB read, 0.10 MB/s read, 6.1 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5585d4f19350#2 capacity: 304.00 MB usage: 30.15 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000666 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1767,29.21 MB,9.60991%) FilterBlock(44,358.17 KB,0.115058%) IndexBlock(44,598.66 KB,0.192311%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 05 10:33:26 compute-0 podman[291614]: 2025-12-05 10:33:26.691614795 +0000 UTC m=+3.821177664 container attach b0bdd0df152721bb0057b39d711eebd7a7bd094dd5c94ea8191606033179d867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_davinci, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 10:33:26 compute-0 lvm[291709]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:33:26 compute-0 lvm[291709]: VG ceph_vg0 finished
Dec 05 10:33:26 compute-0 fervent_davinci[291633]: {}
Dec 05 10:33:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:26.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:33:26 compute-0 systemd[1]: libpod-b0bdd0df152721bb0057b39d711eebd7a7bd094dd5c94ea8191606033179d867.scope: Deactivated successfully.
Dec 05 10:33:26 compute-0 podman[291614]: 2025-12-05 10:33:26.783597735 +0000 UTC m=+3.913160594 container died b0bdd0df152721bb0057b39d711eebd7a7bd094dd5c94ea8191606033179d867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_davinci, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 10:33:26 compute-0 systemd[1]: libpod-b0bdd0df152721bb0057b39d711eebd7a7bd094dd5c94ea8191606033179d867.scope: Consumed 1.567s CPU time.
Dec 05 10:33:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:27.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:27.550Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:33:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:33:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:33:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:33:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:33:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:33:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:33:27
Dec 05 10:33:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:33:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:33:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'backups', '.nfs', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes']
Dec 05 10:33:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:33:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:33:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:33:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:33:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:33:28 compute-0 nova_compute[257087]: 2025-12-05 10:33:28.567 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:33:28 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:33:28 compute-0 ceph-mon[74418]: pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9089bade6e88436f906aa58d423b1564d596b2db51b78dbe755840478afdb71-merged.mount: Deactivated successfully.
Dec 05 10:33:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:28.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:33:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:28.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:33:29 compute-0 podman[291614]: 2025-12-05 10:33:29.443491574 +0000 UTC m=+6.573054443 container remove b0bdd0df152721bb0057b39d711eebd7a7bd094dd5c94ea8191606033179d867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:33:29 compute-0 systemd[1]: libpod-conmon-b0bdd0df152721bb0057b39d711eebd7a7bd094dd5c94ea8191606033179d867.scope: Deactivated successfully.
Dec 05 10:33:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:33:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:29.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:33:29 compute-0 sudo[291506]: pam_unix(sudo:session): session closed for user root
Dec 05 10:33:29 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:33:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:30.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:31.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:32 compute-0 ceph-mon[74418]: pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:33:32 compute-0 ceph-mon[74418]: pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:33:32 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:33:32 compute-0 ceph-mon[74418]: pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:33:32 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:33:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:33:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:32.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:33:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:33:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:33:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:33:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:33.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:33 compute-0 nova_compute[257087]: 2025-12-05 10:33:33.572 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:33 compute-0 nova_compute[257087]: 2025-12-05 10:33:33.574 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:33 compute-0 nova_compute[257087]: 2025-12-05 10:33:33.574 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:33:33 compute-0 nova_compute[257087]: 2025-12-05 10:33:33.574 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:33 compute-0 nova_compute[257087]: 2025-12-05 10:33:33.640 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:33 compute-0 nova_compute[257087]: 2025-12-05 10:33:33.641 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:33.830Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:34 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:33:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:34.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:35.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:33:35] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:33:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:33:35] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:33:35 compute-0 ceph-mon[74418]: pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:35 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:35 compute-0 ceph-mon[74418]: pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:33:35 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:35 compute-0 sudo[291735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:33:35 compute-0 sudo[291735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:33:35 compute-0 sudo[291735]: pam_unix(sudo:session): session closed for user root
Dec 05 10:33:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:33:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:36.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:33:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:37.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:37.552Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:37 compute-0 ceph-mon[74418]: pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:37 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:33:37 compute-0 sudo[291762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:33:37 compute-0 sudo[291762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:33:37 compute-0 sudo[291762]: pam_unix(sudo:session): session closed for user root
Dec 05 10:33:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:33:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:33:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:33:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:33:38 compute-0 nova_compute[257087]: 2025-12-05 10:33:38.642 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:38 compute-0 nova_compute[257087]: 2025-12-05 10:33:38.645 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:38 compute-0 nova_compute[257087]: 2025-12-05 10:33:38.645 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:33:38 compute-0 nova_compute[257087]: 2025-12-05 10:33:38.646 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:38 compute-0 nova_compute[257087]: 2025-12-05 10:33:38.697 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:38 compute-0 nova_compute[257087]: 2025-12-05 10:33:38.698 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:38.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:38.929Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:39 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:33:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:39.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:39 compute-0 ceph-mon[74418]: pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:40.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:41 compute-0 ceph-mon[74418]: pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:41.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:42 compute-0 podman[291791]: 2025-12-05 10:33:42.413297905 +0000 UTC m=+0.069328876 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 05 10:33:42 compute-0 podman[291793]: 2025-12-05 10:33:42.417103718 +0000 UTC m=+0.071468373 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd)
Dec 05 10:33:42 compute-0 podman[291792]: 2025-12-05 10:33:42.458087012 +0000 UTC m=+0.111952084 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:33:42 compute-0 ceph-mon[74418]: pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:33:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:33:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:42.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:33:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:33:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:33:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:33:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:43.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:43 compute-0 nova_compute[257087]: 2025-12-05 10:33:43.699 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:43.831Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:33:44 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:33:44 compute-0 ceph-mon[74418]: pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:44.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:45.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:33:45] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:33:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:33:45] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:33:45 compute-0 ceph-mon[74418]: pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:46.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:47.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:47.553Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:47 compute-0 ceph-mon[74418]: pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:33:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:33:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:33:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:33:48 compute-0 nova_compute[257087]: 2025-12-05 10:33:48.701 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:48 compute-0 nova_compute[257087]: 2025-12-05 10:33:48.701 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:48 compute-0 nova_compute[257087]: 2025-12-05 10:33:48.702 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:33:48 compute-0 nova_compute[257087]: 2025-12-05 10:33:48.702 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:48 compute-0 nova_compute[257087]: 2025-12-05 10:33:48.702 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:48 compute-0 nova_compute[257087]: 2025-12-05 10:33:48.703 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:33:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:48.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:33:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:48.929Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:33:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:49.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:50.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:33:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:51.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:33:51 compute-0 ceph-mon[74418]: pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:33:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:33:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:52.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:52 compute-0 ceph-mon[74418]: pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:33:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:33:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:33:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:33:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:33:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:53.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:33:53 compute-0 nova_compute[257087]: 2025-12-05 10:33:53.703 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:53.833Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:33:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:33:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:33:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:54.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:54 compute-0 ceph-mon[74418]: pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:33:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:55.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:33:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:33:55] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:33:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:33:55] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:33:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:33:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:56.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:33:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:57.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:33:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:57.554Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:33:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:57.554Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:33:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:57.554Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:33:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:33:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:33:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:33:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:33:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:33:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:33:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:33:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:33:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:33:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:33:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:33:58 compute-0 sudo[291872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:33:58 compute-0 sudo[291872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:33:58 compute-0 sudo[291872]: pam_unix(sudo:session): session closed for user root
Dec 05 10:33:58 compute-0 nova_compute[257087]: 2025-12-05 10:33:58.706 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:58 compute-0 nova_compute[257087]: 2025-12-05 10:33:58.708 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:33:58 compute-0 nova_compute[257087]: 2025-12-05 10:33:58.708 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:33:58 compute-0 nova_compute[257087]: 2025-12-05 10:33:58.709 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:58 compute-0 nova_compute[257087]: 2025-12-05 10:33:58.748 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:33:58 compute-0 nova_compute[257087]: 2025-12-05 10:33:58.748 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:33:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:33:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:33:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:33:58.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:33:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:58.930Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:33:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:33:58.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:33:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:33:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:33:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:33:59.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:34:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:00.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:01 compute-0 ceph-mds[96460]: mds.beacon.cephfs.compute-0.hfgtsk missed beacon ack from the monitors
Dec 05 10:34:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:01.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:02.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:34:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:34:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:34:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:34:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:03.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:03 compute-0 nova_compute[257087]: 2025-12-05 10:34:03.749 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:34:03 compute-0 nova_compute[257087]: 2025-12-05 10:34:03.752 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:34:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:03.833Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:34:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:03.833Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:34:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:04.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:05 compute-0 ceph-mds[96460]: mds.beacon.cephfs.compute-0.hfgtsk missed beacon ack from the monitors
Dec 05 10:34:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:05.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:34:05] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:34:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:34:05] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:34:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).mds e10 check_health: resetting beacon timeouts due to mon delay (slow election?) of 12.4128 seconds
Dec 05 10:34:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:34:06 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 05 10:34:06 compute-0 ceph-mon[74418]: paxos.0).electionLogic(15) init, last seen epoch 15, mid-election, bumping
Dec 05 10:34:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:06.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:06 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 10:34:06 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:34:06 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:34:06 compute-0 ceph-mon[74418]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 05 10:34:07 compute-0 ceph-mon[74418]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 05 10:34:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:07.555Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:07.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:34:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:34:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:34:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:34:08 compute-0 ceph-mon[74418]: pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : monmap epoch 3
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : last_changed 2025-12-05T09:46:29.159401+0000
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : created 2025-12-05T09:43:16.088283+0000
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 05 10:34:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active} 2 up:standby
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.hvnxai(active, since 43m), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :      osd.0 observed slow operation indications in BlueStore
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec 05 10:34:08 compute-0 ceph-mon[74418]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.qiwwqr on compute-1 is in error state
Dec 05 10:34:08 compute-0 nova_compute[257087]: 2025-12-05 10:34:08.752 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:34:08 compute-0 nova_compute[257087]: 2025-12-05 10:34:08.754 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:34:08 compute-0 nova_compute[257087]: 2025-12-05 10:34:08.754 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:34:08 compute-0 nova_compute[257087]: 2025-12-05 10:34:08.755 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:34:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:08.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:08 compute-0 nova_compute[257087]: 2025-12-05 10:34:08.836 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:34:08 compute-0 nova_compute[257087]: 2025-12-05 10:34:08.838 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:34:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:08.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:09.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:10 compute-0 ceph-mon[74418]: pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:34:10 compute-0 ceph-mon[74418]: pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:34:10 compute-0 ceph-mon[74418]: pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:34:10 compute-0 ceph-mon[74418]: pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:10 compute-0 ceph-mon[74418]: pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:34:10 compute-0 ceph-mon[74418]: mon.compute-2 calling monitor election
Dec 05 10:34:10 compute-0 ceph-mon[74418]: mon.compute-0 calling monitor election
Dec 05 10:34:10 compute-0 ceph-mon[74418]: pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:10 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:34:10 compute-0 ceph-mon[74418]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 05 10:34:10 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/616165517' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:34:10 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/616165517' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:34:10 compute-0 ceph-mon[74418]: monmap epoch 3
Dec 05 10:34:10 compute-0 ceph-mon[74418]: fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5
Dec 05 10:34:10 compute-0 ceph-mon[74418]: last_changed 2025-12-05T09:46:29.159401+0000
Dec 05 10:34:10 compute-0 ceph-mon[74418]: created 2025-12-05T09:43:16.088283+0000
Dec 05 10:34:10 compute-0 ceph-mon[74418]: min_mon_release 19 (squid)
Dec 05 10:34:10 compute-0 ceph-mon[74418]: election_strategy: 1
Dec 05 10:34:10 compute-0 ceph-mon[74418]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 05 10:34:10 compute-0 ceph-mon[74418]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 05 10:34:10 compute-0 ceph-mon[74418]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 05 10:34:10 compute-0 ceph-mon[74418]: fsmap cephfs:1 {0=cephfs.compute-2.qyxerc=up:active} 2 up:standby
Dec 05 10:34:10 compute-0 ceph-mon[74418]: osdmap e139: 3 total, 3 up, 3 in
Dec 05 10:34:10 compute-0 ceph-mon[74418]: mgrmap e30: compute-0.hvnxai(active, since 43m), standbys: compute-2.wewrgp, compute-1.unhddt
Dec 05 10:34:10 compute-0 ceph-mon[74418]: Health detail: HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Dec 05 10:34:10 compute-0 ceph-mon[74418]: [WRN] BLUESTORE_SLOW_OP_ALERT: 2 OSD(s) experiencing slow operations in BlueStore
Dec 05 10:34:10 compute-0 ceph-mon[74418]:      osd.0 observed slow operation indications in BlueStore
Dec 05 10:34:10 compute-0 ceph-mon[74418]:      osd.1 observed slow operation indications in BlueStore
Dec 05 10:34:10 compute-0 ceph-mon[74418]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec 05 10:34:10 compute-0 ceph-mon[74418]:     daemon nfs.cephfs.0.0.compute-1.qiwwqr on compute-1 is in error state
Dec 05 10:34:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:34:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:10.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:11.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:34:11 compute-0 ceph-mon[74418]: mon.compute-1 calling monitor election
Dec 05 10:34:11 compute-0 ceph-mon[74418]: pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:12 compute-0 nova_compute[257087]: 2025-12-05 10:34:12.131 257094 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 0.97 sec
Dec 05 10:34:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:34:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:34:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:12.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:34:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:34:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:34:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:34:13 compute-0 ceph-mon[74418]: pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:34:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:34:13 compute-0 podman[291913]: 2025-12-05 10:34:13.393107033 +0000 UTC m=+0.058274735 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:34:13 compute-0 podman[291915]: 2025-12-05 10:34:13.408435819 +0000 UTC m=+0.062635883 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:34:13 compute-0 podman[291914]: 2025-12-05 10:34:13.458314965 +0000 UTC m=+0.110583306 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 05 10:34:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:13.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:13.834Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:34:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:13.835Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:13 compute-0 nova_compute[257087]: 2025-12-05 10:34:13.837 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:34:13 compute-0 nova_compute[257087]: 2025-12-05 10:34:13.840 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:34:14 compute-0 ceph-mon[74418]: pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:14 compute-0 nova_compute[257087]: 2025-12-05 10:34:14.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:34:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:14.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:15 compute-0 ceph-mon[74418]: pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:15.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:34:15] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 05 10:34:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:34:15] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 05 10:34:16 compute-0 nova_compute[257087]: 2025-12-05 10:34:16.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:34:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:34:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:16.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:17 compute-0 nova_compute[257087]: 2025-12-05 10:34:17.523 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:34:17 compute-0 nova_compute[257087]: 2025-12-05 10:34:17.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:34:17 compute-0 nova_compute[257087]: 2025-12-05 10:34:17.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:34:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:17.555Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:34:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:17.556Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:34:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:17.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-crash-compute-0[79586]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec 05 10:34:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:34:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:34:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:34:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:34:18 compute-0 sudo[291976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:34:18 compute-0 sudo[291976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:34:18 compute-0 sudo[291976]: pam_unix(sudo:session): session closed for user root
Dec 05 10:34:18 compute-0 ceph-mon[74418]: pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:18 compute-0 nova_compute[257087]: 2025-12-05 10:34:18.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:34:18 compute-0 nova_compute[257087]: 2025-12-05 10:34:18.682 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:34:18 compute-0 nova_compute[257087]: 2025-12-05 10:34:18.683 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:34:18 compute-0 nova_compute[257087]: 2025-12-05 10:34:18.683 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:34:18 compute-0 nova_compute[257087]: 2025-12-05 10:34:18.683 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:34:18 compute-0 nova_compute[257087]: 2025-12-05 10:34:18.684 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:34:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:18 compute-0 nova_compute[257087]: 2025-12-05 10:34:18.839 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:34:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:34:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:18.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:34:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:18.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:34:19 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/582504322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:34:19 compute-0 nova_compute[257087]: 2025-12-05 10:34:19.186 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:34:19 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1111649149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:34:19 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/582504322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:34:19 compute-0 nova_compute[257087]: 2025-12-05 10:34:19.373 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:34:19 compute-0 nova_compute[257087]: 2025-12-05 10:34:19.375 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4516MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:34:19 compute-0 nova_compute[257087]: 2025-12-05 10:34:19.376 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:34:19 compute-0 nova_compute[257087]: 2025-12-05 10:34:19.376 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:34:19 compute-0 nova_compute[257087]: 2025-12-05 10:34:19.456 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:34:19 compute-0 nova_compute[257087]: 2025-12-05 10:34:19.457 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:34:19 compute-0 nova_compute[257087]: 2025-12-05 10:34:19.500 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:34:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:19.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:34:19 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968538865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:34:19 compute-0 nova_compute[257087]: 2025-12-05 10:34:19.965 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:34:19 compute-0 nova_compute[257087]: 2025-12-05 10:34:19.972 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:34:20 compute-0 nova_compute[257087]: 2025-12-05 10:34:20.001 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:34:20 compute-0 nova_compute[257087]: 2025-12-05 10:34:20.003 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:34:20 compute-0 nova_compute[257087]: 2025-12-05 10:34:20.003 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:34:20 compute-0 ceph-mon[74418]: pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:20 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3107460420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:34:20 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1814016658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:34:20 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3968538865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:34:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:34:20.596 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:34:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:34:20.597 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:34:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:34:20.597 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:34:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:20.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:21.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4208548885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:34:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:34:22 compute-0 nova_compute[257087]: 2025-12-05 10:34:22.000 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:34:22 compute-0 nova_compute[257087]: 2025-12-05 10:34:22.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:34:22 compute-0 nova_compute[257087]: 2025-12-05 10:34:22.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:34:22 compute-0 nova_compute[257087]: 2025-12-05 10:34:22.529 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:34:22 compute-0 nova_compute[257087]: 2025-12-05 10:34:22.796 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:34:22 compute-0 nova_compute[257087]: 2025-12-05 10:34:22.796 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:34:22 compute-0 nova_compute[257087]: 2025-12-05 10:34:22.796 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:34:22 compute-0 ceph-mon[74418]: pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:22.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:34:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:34:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:34:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:34:23 compute-0 nova_compute[257087]: 2025-12-05 10:34:23.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:34:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:23.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:23 compute-0 ceph-mon[74418]: pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:23.836Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:23 compute-0 nova_compute[257087]: 2025-12-05 10:34:23.841 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:34:23 compute-0 nova_compute[257087]: 2025-12-05 10:34:23.842 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:34:23 compute-0 nova_compute[257087]: 2025-12-05 10:34:23.842 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:34:23 compute-0 nova_compute[257087]: 2025-12-05 10:34:23.842 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:34:23 compute-0 nova_compute[257087]: 2025-12-05 10:34:23.843 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:34:23 compute-0 nova_compute[257087]: 2025-12-05 10:34:23.844 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:34:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:24.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:25.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:34:25] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 05 10:34:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:34:25] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 05 10:34:26 compute-0 ceph-mon[74418]: pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:26.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:26 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:34:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:27.556Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:27.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:34:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:34:27 compute-0 ceph-mon[74418]: pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:34:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:34:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:34:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:34:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:34:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:34:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:34:27
Dec 05 10:34:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:34:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:34:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'vms', 'default.rgw.log', '.nfs', '.mgr']
Dec 05 10:34:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:34:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:34:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:34:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:34:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:34:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:28 compute-0 nova_compute[257087]: 2025-12-05 10:34:28.843 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:34:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:28.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:28.934Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:34:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:29.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:30 compute-0 ceph-mon[74418]: pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:30.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:31.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:31 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:34:32 compute-0 ceph-mon[74418]: pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:32.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:34:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:34:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:34:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:34:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:33 compute-0 ceph-mon[74418]: pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:33.837Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:34:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:33.837Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:33 compute-0 nova_compute[257087]: 2025-12-05 10:34:33.844 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:34:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:34.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:34:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:35.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:34:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:34:35] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:34:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:34:35] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:34:35 compute-0 ceph-mon[74418]: pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:36 compute-0 sudo[292063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:34:36 compute-0 sudo[292063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:34:36 compute-0 sudo[292063]: pam_unix(sudo:session): session closed for user root
Dec 05 10:34:36 compute-0 sudo[292088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:34:36 compute-0 sudo[292088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:34:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:36 compute-0 sudo[292088]: pam_unix(sudo:session): session closed for user root
Dec 05 10:34:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:36.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:34:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:34:36 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:34:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:34:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:34:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:34:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:34:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:34:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:34:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:34:36 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:34:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:34:36 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:34:36 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:34:36 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:34:36 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:34:36 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:34:36 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:34:36 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:34:36 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:34:36 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:34:36 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:34:36 compute-0 sudo[292148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:34:36 compute-0 sudo[292148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:34:36 compute-0 sudo[292148]: pam_unix(sudo:session): session closed for user root
Dec 05 10:34:37 compute-0 sudo[292173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:34:37 compute-0 sudo[292173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:34:37 compute-0 podman[292238]: 2025-12-05 10:34:37.470435437 +0000 UTC m=+0.047041360 container create a78cd05907cb6bab987e4b0e53e5c3657edf17192e61f77820d40a6e56dd7b6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:34:37 compute-0 systemd[1]: Started libpod-conmon-a78cd05907cb6bab987e4b0e53e5c3657edf17192e61f77820d40a6e56dd7b6b.scope.
Dec 05 10:34:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:34:37 compute-0 podman[292238]: 2025-12-05 10:34:37.451085541 +0000 UTC m=+0.027691484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:34:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:37.557Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:37 compute-0 podman[292238]: 2025-12-05 10:34:37.562671404 +0000 UTC m=+0.139277337 container init a78cd05907cb6bab987e4b0e53e5c3657edf17192e61f77820d40a6e56dd7b6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wiles, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:34:37 compute-0 podman[292238]: 2025-12-05 10:34:37.571269458 +0000 UTC m=+0.147875411 container start a78cd05907cb6bab987e4b0e53e5c3657edf17192e61f77820d40a6e56dd7b6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wiles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:34:37 compute-0 podman[292238]: 2025-12-05 10:34:37.57466531 +0000 UTC m=+0.151271253 container attach a78cd05907cb6bab987e4b0e53e5c3657edf17192e61f77820d40a6e56dd7b6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 10:34:37 compute-0 condescending_wiles[292255]: 167 167
Dec 05 10:34:37 compute-0 systemd[1]: libpod-a78cd05907cb6bab987e4b0e53e5c3657edf17192e61f77820d40a6e56dd7b6b.scope: Deactivated successfully.
Dec 05 10:34:37 compute-0 conmon[292255]: conmon a78cd05907cb6bab987e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a78cd05907cb6bab987e4b0e53e5c3657edf17192e61f77820d40a6e56dd7b6b.scope/container/memory.events
Dec 05 10:34:37 compute-0 podman[292238]: 2025-12-05 10:34:37.580538359 +0000 UTC m=+0.157144282 container died a78cd05907cb6bab987e4b0e53e5c3657edf17192e61f77820d40a6e56dd7b6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 05 10:34:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce933b21b51157d186f935c749dbc8338f328419153da843d42a78683efc5803-merged.mount: Deactivated successfully.
Dec 05 10:34:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:37.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:37 compute-0 podman[292238]: 2025-12-05 10:34:37.61697352 +0000 UTC m=+0.193579443 container remove a78cd05907cb6bab987e4b0e53e5c3657edf17192e61f77820d40a6e56dd7b6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 05 10:34:37 compute-0 systemd[1]: libpod-conmon-a78cd05907cb6bab987e4b0e53e5c3657edf17192e61f77820d40a6e56dd7b6b.scope: Deactivated successfully.
Dec 05 10:34:37 compute-0 podman[292278]: 2025-12-05 10:34:37.825880658 +0000 UTC m=+0.058430509 container create 5d43ad6befa58c7c14d9a835263f095269b46f807c5fed7fb0a49eba330457af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_greider, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:34:37 compute-0 systemd[1]: Started libpod-conmon-5d43ad6befa58c7c14d9a835263f095269b46f807c5fed7fb0a49eba330457af.scope.
Dec 05 10:34:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:34:37 compute-0 podman[292278]: 2025-12-05 10:34:37.804728783 +0000 UTC m=+0.037278674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540b5a5945ff739dd444ee710d0927cef604338c8a4f0de9aa957fc8ea46236c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540b5a5945ff739dd444ee710d0927cef604338c8a4f0de9aa957fc8ea46236c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540b5a5945ff739dd444ee710d0927cef604338c8a4f0de9aa957fc8ea46236c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540b5a5945ff739dd444ee710d0927cef604338c8a4f0de9aa957fc8ea46236c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540b5a5945ff739dd444ee710d0927cef604338c8a4f0de9aa957fc8ea46236c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:37 compute-0 podman[292278]: 2025-12-05 10:34:37.919596465 +0000 UTC m=+0.152146306 container init 5d43ad6befa58c7c14d9a835263f095269b46f807c5fed7fb0a49eba330457af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_greider, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:34:37 compute-0 podman[292278]: 2025-12-05 10:34:37.926799541 +0000 UTC m=+0.159349382 container start 5d43ad6befa58c7c14d9a835263f095269b46f807c5fed7fb0a49eba330457af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_greider, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 05 10:34:37 compute-0 podman[292278]: 2025-12-05 10:34:37.929912406 +0000 UTC m=+0.162462247 container attach 5d43ad6befa58c7c14d9a835263f095269b46f807c5fed7fb0a49eba330457af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 10:34:37 compute-0 ceph-mon[74418]: pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:37 compute-0 ceph-mon[74418]: pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:34:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:34:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:34:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:34:38 compute-0 sudo[292302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:34:38 compute-0 sudo[292302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:34:38 compute-0 sudo[292302]: pam_unix(sudo:session): session closed for user root
Dec 05 10:34:38 compute-0 elegant_greider[292295]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:34:38 compute-0 elegant_greider[292295]: --> All data devices are unavailable
Dec 05 10:34:38 compute-0 systemd[1]: libpod-5d43ad6befa58c7c14d9a835263f095269b46f807c5fed7fb0a49eba330457af.scope: Deactivated successfully.
Dec 05 10:34:38 compute-0 podman[292278]: 2025-12-05 10:34:38.319346241 +0000 UTC m=+0.551896122 container died 5d43ad6befa58c7c14d9a835263f095269b46f807c5fed7fb0a49eba330457af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 10:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-540b5a5945ff739dd444ee710d0927cef604338c8a4f0de9aa957fc8ea46236c-merged.mount: Deactivated successfully.
Dec 05 10:34:38 compute-0 podman[292278]: 2025-12-05 10:34:38.363677816 +0000 UTC m=+0.596227657 container remove 5d43ad6befa58c7c14d9a835263f095269b46f807c5fed7fb0a49eba330457af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 10:34:38 compute-0 systemd[1]: libpod-conmon-5d43ad6befa58c7c14d9a835263f095269b46f807c5fed7fb0a49eba330457af.scope: Deactivated successfully.
Dec 05 10:34:38 compute-0 sudo[292173]: pam_unix(sudo:session): session closed for user root
Dec 05 10:34:38 compute-0 sudo[292348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:34:38 compute-0 sudo[292348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:34:38 compute-0 sudo[292348]: pam_unix(sudo:session): session closed for user root
Dec 05 10:34:38 compute-0 sudo[292373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:34:38 compute-0 sudo[292373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:34:38 compute-0 nova_compute[257087]: 2025-12-05 10:34:38.847 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:34:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:38.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:38.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:38 compute-0 podman[292443]: 2025-12-05 10:34:38.972705 +0000 UTC m=+0.047286327 container create 246196eefd416abab651b9a87598d1cf22cba1278e56e67e6110923285230455 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 10:34:39 compute-0 systemd[1]: Started libpod-conmon-246196eefd416abab651b9a87598d1cf22cba1278e56e67e6110923285230455.scope.
Dec 05 10:34:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:34:39 compute-0 podman[292443]: 2025-12-05 10:34:38.951696798 +0000 UTC m=+0.026278175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:34:39 compute-0 podman[292443]: 2025-12-05 10:34:39.058535143 +0000 UTC m=+0.133116480 container init 246196eefd416abab651b9a87598d1cf22cba1278e56e67e6110923285230455 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 05 10:34:39 compute-0 podman[292443]: 2025-12-05 10:34:39.066402396 +0000 UTC m=+0.140983713 container start 246196eefd416abab651b9a87598d1cf22cba1278e56e67e6110923285230455 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 10:34:39 compute-0 podman[292443]: 2025-12-05 10:34:39.069560893 +0000 UTC m=+0.144142220 container attach 246196eefd416abab651b9a87598d1cf22cba1278e56e67e6110923285230455 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:34:39 compute-0 angry_grothendieck[292459]: 167 167
Dec 05 10:34:39 compute-0 systemd[1]: libpod-246196eefd416abab651b9a87598d1cf22cba1278e56e67e6110923285230455.scope: Deactivated successfully.
Dec 05 10:34:39 compute-0 podman[292443]: 2025-12-05 10:34:39.071439583 +0000 UTC m=+0.146020900 container died 246196eefd416abab651b9a87598d1cf22cba1278e56e67e6110923285230455 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 10:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdf3068e6a0d69de5947fc27cf2eddd3b0ef45afa3a3a16fdde61a127b25eef7-merged.mount: Deactivated successfully.
Dec 05 10:34:39 compute-0 podman[292443]: 2025-12-05 10:34:39.105853158 +0000 UTC m=+0.180434485 container remove 246196eefd416abab651b9a87598d1cf22cba1278e56e67e6110923285230455 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 05 10:34:39 compute-0 systemd[1]: libpod-conmon-246196eefd416abab651b9a87598d1cf22cba1278e56e67e6110923285230455.scope: Deactivated successfully.
Dec 05 10:34:39 compute-0 podman[292484]: 2025-12-05 10:34:39.273745282 +0000 UTC m=+0.044802079 container create d9077afd8a53da472cc18bfc3226c8a0897cbbd008154f97f55143cc97126cd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_gagarin, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 05 10:34:39 compute-0 systemd[1]: Started libpod-conmon-d9077afd8a53da472cc18bfc3226c8a0897cbbd008154f97f55143cc97126cd9.scope.
Dec 05 10:34:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:34:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1890b0d67a618f54bae6ca1b22fd7c6921bdb37dc66b5fbec33290509eef2d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1890b0d67a618f54bae6ca1b22fd7c6921bdb37dc66b5fbec33290509eef2d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1890b0d67a618f54bae6ca1b22fd7c6921bdb37dc66b5fbec33290509eef2d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1890b0d67a618f54bae6ca1b22fd7c6921bdb37dc66b5fbec33290509eef2d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:39 compute-0 podman[292484]: 2025-12-05 10:34:39.339102739 +0000 UTC m=+0.110159566 container init d9077afd8a53da472cc18bfc3226c8a0897cbbd008154f97f55143cc97126cd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:34:39 compute-0 podman[292484]: 2025-12-05 10:34:39.347131187 +0000 UTC m=+0.118187984 container start d9077afd8a53da472cc18bfc3226c8a0897cbbd008154f97f55143cc97126cd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:34:39 compute-0 podman[292484]: 2025-12-05 10:34:39.25380068 +0000 UTC m=+0.024857477 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:34:39 compute-0 podman[292484]: 2025-12-05 10:34:39.350065197 +0000 UTC m=+0.121122204 container attach d9077afd8a53da472cc18bfc3226c8a0897cbbd008154f97f55143cc97126cd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_gagarin, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 05 10:34:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:39.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]: {
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:     "1": [
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:         {
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             "devices": [
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "/dev/loop3"
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             ],
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             "lv_name": "ceph_lv0",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             "lv_size": "21470642176",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             "name": "ceph_lv0",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             "tags": {
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.cluster_name": "ceph",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.crush_device_class": "",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.encrypted": "0",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.osd_id": "1",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.type": "block",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.vdo": "0",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:                 "ceph.with_tpm": "0"
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             },
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             "type": "block",
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:             "vg_name": "ceph_vg0"
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:         }
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]:     ]
Dec 05 10:34:39 compute-0 sweet_gagarin[292500]: }
Dec 05 10:34:39 compute-0 systemd[1]: libpod-d9077afd8a53da472cc18bfc3226c8a0897cbbd008154f97f55143cc97126cd9.scope: Deactivated successfully.
Dec 05 10:34:39 compute-0 podman[292484]: 2025-12-05 10:34:39.649937878 +0000 UTC m=+0.420994675 container died d9077afd8a53da472cc18bfc3226c8a0897cbbd008154f97f55143cc97126cd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 10:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1890b0d67a618f54bae6ca1b22fd7c6921bdb37dc66b5fbec33290509eef2d9-merged.mount: Deactivated successfully.
Dec 05 10:34:39 compute-0 podman[292484]: 2025-12-05 10:34:39.693310676 +0000 UTC m=+0.464367473 container remove d9077afd8a53da472cc18bfc3226c8a0897cbbd008154f97f55143cc97126cd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:34:39 compute-0 systemd[1]: libpod-conmon-d9077afd8a53da472cc18bfc3226c8a0897cbbd008154f97f55143cc97126cd9.scope: Deactivated successfully.
Dec 05 10:34:39 compute-0 sudo[292373]: pam_unix(sudo:session): session closed for user root
Dec 05 10:34:39 compute-0 sudo[292524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:34:39 compute-0 sudo[292524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:34:39 compute-0 sudo[292524]: pam_unix(sudo:session): session closed for user root
Dec 05 10:34:39 compute-0 sudo[292549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:34:39 compute-0 sudo[292549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:34:39 compute-0 ceph-mon[74418]: pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:40 compute-0 podman[292616]: 2025-12-05 10:34:40.281470953 +0000 UTC m=+0.051540382 container create 9b1c56feafd165ce7e546bead31f612c5ed8ea7a0a55d58d9ef13791afeceb0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 10:34:40 compute-0 systemd[1]: Started libpod-conmon-9b1c56feafd165ce7e546bead31f612c5ed8ea7a0a55d58d9ef13791afeceb0e.scope.
Dec 05 10:34:40 compute-0 podman[292616]: 2025-12-05 10:34:40.257221523 +0000 UTC m=+0.027291002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:34:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:34:40 compute-0 podman[292616]: 2025-12-05 10:34:40.37591408 +0000 UTC m=+0.145983489 container init 9b1c56feafd165ce7e546bead31f612c5ed8ea7a0a55d58d9ef13791afeceb0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_mendeleev, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:34:40 compute-0 podman[292616]: 2025-12-05 10:34:40.382583331 +0000 UTC m=+0.152652720 container start 9b1c56feafd165ce7e546bead31f612c5ed8ea7a0a55d58d9ef13791afeceb0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 10:34:40 compute-0 podman[292616]: 2025-12-05 10:34:40.385640334 +0000 UTC m=+0.155709723 container attach 9b1c56feafd165ce7e546bead31f612c5ed8ea7a0a55d58d9ef13791afeceb0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 10:34:40 compute-0 elastic_mendeleev[292633]: 167 167
Dec 05 10:34:40 compute-0 systemd[1]: libpod-9b1c56feafd165ce7e546bead31f612c5ed8ea7a0a55d58d9ef13791afeceb0e.scope: Deactivated successfully.
Dec 05 10:34:40 compute-0 conmon[292633]: conmon 9b1c56feafd165ce7e54 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b1c56feafd165ce7e546bead31f612c5ed8ea7a0a55d58d9ef13791afeceb0e.scope/container/memory.events
Dec 05 10:34:40 compute-0 podman[292616]: 2025-12-05 10:34:40.39025346 +0000 UTC m=+0.160322849 container died 9b1c56feafd165ce7e546bead31f612c5ed8ea7a0a55d58d9ef13791afeceb0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_mendeleev, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3aa2eca46457e9d46b1f32e08fa63aff59dd2a2887f75c58cf302474df83a40-merged.mount: Deactivated successfully.
Dec 05 10:34:40 compute-0 podman[292616]: 2025-12-05 10:34:40.42447066 +0000 UTC m=+0.194540049 container remove 9b1c56feafd165ce7e546bead31f612c5ed8ea7a0a55d58d9ef13791afeceb0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_mendeleev, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 10:34:40 compute-0 systemd[1]: libpod-conmon-9b1c56feafd165ce7e546bead31f612c5ed8ea7a0a55d58d9ef13791afeceb0e.scope: Deactivated successfully.
Dec 05 10:34:40 compute-0 podman[292657]: 2025-12-05 10:34:40.604808452 +0000 UTC m=+0.052990571 container create 54ca75dc3994b3470092fa1687ca61573100a10e4dd3ea7812a699b278d8c07a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_tesla, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 10:34:40 compute-0 systemd[1]: Started libpod-conmon-54ca75dc3994b3470092fa1687ca61573100a10e4dd3ea7812a699b278d8c07a.scope.
Dec 05 10:34:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c525fb699b35a1a8077b183239470a42abb7bc0abec75de19481b105cc8c0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c525fb699b35a1a8077b183239470a42abb7bc0abec75de19481b105cc8c0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c525fb699b35a1a8077b183239470a42abb7bc0abec75de19481b105cc8c0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c525fb699b35a1a8077b183239470a42abb7bc0abec75de19481b105cc8c0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:34:40 compute-0 podman[292657]: 2025-12-05 10:34:40.582598828 +0000 UTC m=+0.030780997 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:34:40 compute-0 podman[292657]: 2025-12-05 10:34:40.689483983 +0000 UTC m=+0.137666102 container init 54ca75dc3994b3470092fa1687ca61573100a10e4dd3ea7812a699b278d8c07a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 10:34:40 compute-0 podman[292657]: 2025-12-05 10:34:40.697846071 +0000 UTC m=+0.146028190 container start 54ca75dc3994b3470092fa1687ca61573100a10e4dd3ea7812a699b278d8c07a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 10:34:40 compute-0 podman[292657]: 2025-12-05 10:34:40.702056495 +0000 UTC m=+0.150238614 container attach 54ca75dc3994b3470092fa1687ca61573100a10e4dd3ea7812a699b278d8c07a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_tesla, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:34:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:40.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:41 compute-0 lvm[292747]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:34:41 compute-0 lvm[292747]: VG ceph_vg0 finished
Dec 05 10:34:41 compute-0 magical_tesla[292673]: {}
Dec 05 10:34:41 compute-0 systemd[1]: libpod-54ca75dc3994b3470092fa1687ca61573100a10e4dd3ea7812a699b278d8c07a.scope: Deactivated successfully.
Dec 05 10:34:41 compute-0 systemd[1]: libpod-54ca75dc3994b3470092fa1687ca61573100a10e4dd3ea7812a699b278d8c07a.scope: Consumed 1.203s CPU time.
Dec 05 10:34:41 compute-0 podman[292657]: 2025-12-05 10:34:41.452869502 +0000 UTC m=+0.901051641 container died 54ca75dc3994b3470092fa1687ca61573100a10e4dd3ea7812a699b278d8c07a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-67c525fb699b35a1a8077b183239470a42abb7bc0abec75de19481b105cc8c0f-merged.mount: Deactivated successfully.
Dec 05 10:34:41 compute-0 podman[292657]: 2025-12-05 10:34:41.494992357 +0000 UTC m=+0.943174476 container remove 54ca75dc3994b3470092fa1687ca61573100a10e4dd3ea7812a699b278d8c07a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:34:41 compute-0 systemd[1]: libpod-conmon-54ca75dc3994b3470092fa1687ca61573100a10e4dd3ea7812a699b278d8c07a.scope: Deactivated successfully.
Dec 05 10:34:41 compute-0 sudo[292549]: pam_unix(sudo:session): session closed for user root
Dec 05 10:34:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:34:41 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:34:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:34:41 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:34:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:41.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:41 compute-0 sudo[292762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:34:41 compute-0 sudo[292762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:34:41 compute-0 sudo[292762]: pam_unix(sudo:session): session closed for user root
Dec 05 10:34:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:34:41 compute-0 ceph-mon[74418]: pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:41 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:34:41 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:34:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:34:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:34:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:42.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:34:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:34:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:34:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:34:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:43.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:43.839Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:34:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:43.839Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:34:43 compute-0 nova_compute[257087]: 2025-12-05 10:34:43.849 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:34:43 compute-0 nova_compute[257087]: 2025-12-05 10:34:43.850 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:34:43 compute-0 nova_compute[257087]: 2025-12-05 10:34:43.851 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:34:43 compute-0 nova_compute[257087]: 2025-12-05 10:34:43.851 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:34:43 compute-0 nova_compute[257087]: 2025-12-05 10:34:43.851 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:34:43 compute-0 nova_compute[257087]: 2025-12-05 10:34:43.854 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:34:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:34:44 compute-0 podman[292789]: 2025-12-05 10:34:44.400725088 +0000 UTC m=+0.062272694 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 05 10:34:44 compute-0 podman[292791]: 2025-12-05 10:34:44.401381296 +0000 UTC m=+0.060746132 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:34:44 compute-0 podman[292790]: 2025-12-05 10:34:44.437347194 +0000 UTC m=+0.098964692 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:34:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:44.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:44 compute-0 ceph-mon[74418]: pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:45.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:34:45] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:34:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:34:45] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:34:45 compute-0 ceph-mon[74418]: pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:46.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:34:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:47.558Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:47.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:47 compute-0 ceph-mon[74418]: pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:34:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:34:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:34:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:34:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:34:48 compute-0 nova_compute[257087]: 2025-12-05 10:34:48.850 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:34:48 compute-0 nova_compute[257087]: 2025-12-05 10:34:48.853 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:34:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:48.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:48.936Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:49.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:50 compute-0 ceph-mon[74418]: pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:50.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:51.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:34:52 compute-0 ceph-mon[74418]: pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:52.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:34:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:34:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:34:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:34:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:34:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:53.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:34:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:53.840Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:34:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:53.840Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:34:53 compute-0 nova_compute[257087]: 2025-12-05 10:34:53.852 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:34:53 compute-0 ceph-mon[74418]: pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:54.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:34:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:55.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:34:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:34:55] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:34:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:34:55] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:34:55 compute-0 ceph-mon[74418]: pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:34:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:34:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.004000107s ======
Dec 05 10:34:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:56.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000107s
Dec 05 10:34:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:34:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1689377086' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:34:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:34:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1689377086' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:34:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:57.561Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:57.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:34:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:34:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:34:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:34:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:34:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:34:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:34:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:34:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:34:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:34:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:34:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:34:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:34:58 compute-0 sudo[292864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:34:58 compute-0 sudo[292864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:34:58 compute-0 sudo[292864]: pam_unix(sudo:session): session closed for user root
Dec 05 10:34:58 compute-0 ceph-mon[74418]: pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1689377086' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:34:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/1689377086' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:34:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:34:58 compute-0 nova_compute[257087]: 2025-12-05 10:34:58.855 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:34:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:34:58.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:34:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:34:58.937Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:34:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:34:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:34:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:34:59.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:34:59 compute-0 ceph-mon[74418]: pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:00.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:01.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:01 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:35:02 compute-0 ceph-mon[74418]: pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:02.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:35:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:35:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:35:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:35:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:03.842Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:03 compute-0 nova_compute[257087]: 2025-12-05 10:35:03.858 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:35:04 compute-0 ceph-mon[74418]: pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:04.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:35:05] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:35:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:35:05] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:35:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:05.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:06 compute-0 ceph-mon[74418]: pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:06 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:35:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:06.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:07.562Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:07.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:35:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:35:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:35:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:35:08 compute-0 ceph-mon[74418]: pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:08 compute-0 nova_compute[257087]: 2025-12-05 10:35:08.860 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:35:08 compute-0 nova_compute[257087]: 2025-12-05 10:35:08.863 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:35:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:08.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:08.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:09 compute-0 ceph-mon[74418]: pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:09.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:35:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:10.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:35:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:11.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:11 compute-0 ceph-mon[74418]: pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:11 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:35:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:35:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:35:12 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:35:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:12.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:35:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:35:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:35:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:35:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:13.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:13 compute-0 ceph-mon[74418]: pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:13.842Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:35:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:13.842Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:35:13 compute-0 nova_compute[257087]: 2025-12-05 10:35:13.864 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:35:13 compute-0 nova_compute[257087]: 2025-12-05 10:35:13.866 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:35:13 compute-0 nova_compute[257087]: 2025-12-05 10:35:13.866 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:35:13 compute-0 nova_compute[257087]: 2025-12-05 10:35:13.866 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:35:13 compute-0 nova_compute[257087]: 2025-12-05 10:35:13.894 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:35:13 compute-0 nova_compute[257087]: 2025-12-05 10:35:13.895 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:35:14 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:14 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:14 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:14.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:15 compute-0 podman[292907]: 2025-12-05 10:35:15.425865328 +0000 UTC m=+0.080703234 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:35:15 compute-0 podman[292909]: 2025-12-05 10:35:15.426962598 +0000 UTC m=+0.070483536 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 10:35:15 compute-0 podman[292908]: 2025-12-05 10:35:15.440364492 +0000 UTC m=+0.090760908 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:35:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:35:15] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:35:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:35:15] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:35:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:15.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:16 compute-0 nova_compute[257087]: 2025-12-05 10:35:16.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:35:16 compute-0 nova_compute[257087]: 2025-12-05 10:35:16.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:35:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:16 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:16 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:16 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:16.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:17 compute-0 nova_compute[257087]: 2025-12-05 10:35:17.525 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:35:17 compute-0 nova_compute[257087]: 2025-12-05 10:35:17.527 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:35:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:17.563Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:17.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:35:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:35:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:35:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:35:18 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:35:18 compute-0 sudo[292972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:35:18 compute-0 sudo[292972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:18 compute-0 sudo[292972]: pam_unix(sudo:session): session closed for user root
Dec 05 10:35:18 compute-0 nova_compute[257087]: 2025-12-05 10:35:18.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:35:18 compute-0 nova_compute[257087]: 2025-12-05 10:35:18.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:35:18 compute-0 ceph-mon[74418]: pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:18 compute-0 ceph-mon[74418]: pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:18 compute-0 nova_compute[257087]: 2025-12-05 10:35:18.609 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:35:18 compute-0 nova_compute[257087]: 2025-12-05 10:35:18.609 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:35:18 compute-0 nova_compute[257087]: 2025-12-05 10:35:18.609 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:35:18 compute-0 nova_compute[257087]: 2025-12-05 10:35:18.610 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:35:18 compute-0 nova_compute[257087]: 2025-12-05 10:35:18.610 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:35:18 compute-0 nova_compute[257087]: 2025-12-05 10:35:18.896 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:35:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:18 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:18 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:18 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:18.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:18.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:19 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:35:19 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1340386658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:35:19 compute-0 nova_compute[257087]: 2025-12-05 10:35:19.105 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:35:19 compute-0 nova_compute[257087]: 2025-12-05 10:35:19.291 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:35:19 compute-0 nova_compute[257087]: 2025-12-05 10:35:19.293 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4488MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:35:19 compute-0 nova_compute[257087]: 2025-12-05 10:35:19.293 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:35:19 compute-0 nova_compute[257087]: 2025-12-05 10:35:19.294 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:35:19 compute-0 ceph-mon[74418]: pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:19 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1340386658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:35:19 compute-0 nova_compute[257087]: 2025-12-05 10:35:19.603 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:35:19 compute-0 nova_compute[257087]: 2025-12-05 10:35:19.604 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:35:19 compute-0 nova_compute[257087]: 2025-12-05 10:35:19.633 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:35:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:19.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:20 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:35:20 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/871080827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:35:20 compute-0 nova_compute[257087]: 2025-12-05 10:35:20.144 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:35:20 compute-0 nova_compute[257087]: 2025-12-05 10:35:20.150 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:35:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:35:20.598 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:35:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:35:20.599 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:35:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:35:20.600 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:35:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:20 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:20 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:20 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:20.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:35:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:21.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:35:21 compute-0 nova_compute[257087]: 2025-12-05 10:35:21.924 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:35:21 compute-0 nova_compute[257087]: 2025-12-05 10:35:21.926 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:35:21 compute-0 nova_compute[257087]: 2025-12-05 10:35:21.926 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:35:22 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/871080827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:35:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:22 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:22 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:22 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:22.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:35:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:35:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:35:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:35:23 compute-0 ceph-mon[74418]: pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:23 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3108847521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:35:23 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1783279234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:35:23 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/727559124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:35:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:35:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:23.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:23.843Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:23 compute-0 nova_compute[257087]: 2025-12-05 10:35:23.898 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:35:24 compute-0 ceph-mon[74418]: pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:24 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/295980527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:35:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:24 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:24 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:24 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:24.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:35:25] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:35:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:35:25] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:35:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:25.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:26 compute-0 ceph-mon[74418]: pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:26 compute-0 nova_compute[257087]: 2025-12-05 10:35:26.927 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:35:26 compute-0 nova_compute[257087]: 2025-12-05 10:35:26.927 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:35:26 compute-0 nova_compute[257087]: 2025-12-05 10:35:26.927 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:35:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:26 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:26 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:26 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:26.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:26 compute-0 nova_compute[257087]: 2025-12-05 10:35:26.962 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:35:26 compute-0 nova_compute[257087]: 2025-12-05 10:35:26.963 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:35:26 compute-0 nova_compute[257087]: 2025-12-05 10:35:26.963 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:35:26 compute-0 nova_compute[257087]: 2025-12-05 10:35:26.963 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:35:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:27.565Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:35:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:35:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:27.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:35:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:35:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:35:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:35:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:35:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:35:27 compute-0 ceph-mon[74418]: pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:35:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:35:27
Dec 05 10:35:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:35:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:35:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.nfs', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'backups', 'default.rgw.control', '.mgr']
Dec 05 10:35:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:35:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:35:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:35:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:35:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:35:28 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:35:28 compute-0 nova_compute[257087]: 2025-12-05 10:35:28.900 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:35:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:28 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:28 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:28 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:28.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:28.942Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:29 compute-0 ceph-mon[74418]: pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:29.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:35:30 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:30 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:30 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:30.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:31.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:31 compute-0 ceph-mon[74418]: pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:35:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1397: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:35:32 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:32 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:32 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:32.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:35:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:35:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:35:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:35:33 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:35:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:33.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:33.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:33 compute-0 nova_compute[257087]: 2025-12-05 10:35:33.902 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:35:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1398: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:35:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:34.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:35 compute-0 ceph-mon[74418]: pgmap v1397: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:35:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:35:35] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:35:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:35:35] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:35:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:35:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:35.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:35:36 compute-0 ceph-mon[74418]: pgmap v1398: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:35:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1399: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:35:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:35:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:36.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:35:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:35:37 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.9 total, 600.0 interval
                                           Cumulative writes: 13K writes, 45K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 4040 syncs, 3.28 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 723 writes, 1146 keys, 723 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s
                                           Interval WAL: 723 writes, 359 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:35:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:37.566Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:37.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:35:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:35:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:35:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:35:38 compute-0 sudo[293062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:35:38 compute-0 sudo[293062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:38 compute-0 sudo[293062]: pam_unix(sudo:session): session closed for user root
Dec 05 10:35:38 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:35:38 compute-0 nova_compute[257087]: 2025-12-05 10:35:38.905 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:35:38 compute-0 nova_compute[257087]: 2025-12-05 10:35:38.906 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:35:38 compute-0 nova_compute[257087]: 2025-12-05 10:35:38.906 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:35:38 compute-0 nova_compute[257087]: 2025-12-05 10:35:38.906 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:35:38 compute-0 nova_compute[257087]: 2025-12-05 10:35:38.907 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:35:38 compute-0 nova_compute[257087]: 2025-12-05 10:35:38.909 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:35:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1400: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:35:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:38.943Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:38.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:39 compute-0 ceph-mon[74418]: pgmap v1399: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:35:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:39.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:40 compute-0 ceph-mon[74418]: pgmap v1400: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:35:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1401: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:35:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:40.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:41.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:41 compute-0 ceph-mon[74418]: pgmap v1401: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:35:41 compute-0 sudo[293091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:35:41 compute-0 sudo[293091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:41 compute-0 sudo[293091]: pam_unix(sudo:session): session closed for user root
Dec 05 10:35:41 compute-0 sudo[293116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 05 10:35:41 compute-0 sudo[293116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:35:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:35:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1402: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:42.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:35:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:35:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:35:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:35:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:35:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:35:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:43.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:43 compute-0 podman[293215]: 2025-12-05 10:35:43.730494052 +0000 UTC m=+1.199227947 container exec 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 10:35:43 compute-0 podman[293215]: 2025-12-05 10:35:43.828459595 +0000 UTC m=+1.297193500 container exec_died 07237ca89b59fe15b2bfada00cca6291c284e983b8ab27a5b4ed6604756ce93e (image=quay.io/ceph/ceph:v19, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:35:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:43.845Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:35:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:43.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:35:43 compute-0 nova_compute[257087]: 2025-12-05 10:35:43.906 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:35:43 compute-0 nova_compute[257087]: 2025-12-05 10:35:43.909 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:35:44 compute-0 ceph-mon[74418]: pgmap v1402: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:35:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:44.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:35:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1403: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:45 compute-0 podman[293337]: 2025-12-05 10:35:45.542386051 +0000 UTC m=+0.710899224 container exec 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:35:45 compute-0 podman[293337]: 2025-12-05 10:35:45.554674585 +0000 UTC m=+0.723187758 container exec_died 76e328516dff8f41a55b6aa278f0957b1c998d62fd221756ce20c2e912067e09 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:35:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:35:45] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 05 10:35:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:35:45] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 05 10:35:45 compute-0 podman[293374]: 2025-12-05 10:35:45.693596421 +0000 UTC m=+0.075200225 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:35:45 compute-0 podman[293379]: 2025-12-05 10:35:45.713118532 +0000 UTC m=+0.085179766 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd)
Dec 05 10:35:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:45.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:45 compute-0 ceph-mon[74418]: pgmap v1403: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:45 compute-0 podman[293378]: 2025-12-05 10:35:45.749164762 +0000 UTC m=+0.126814618 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:35:45 compute-0 podman[293495]: 2025-12-05 10:35:45.953904796 +0000 UTC m=+0.063504857 container exec 861f6a1b65dda022baecf3a1d543dbc6380dd0161a45bd75168d782fe13058a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:35:45 compute-0 podman[293495]: 2025-12-05 10:35:45.967762753 +0000 UTC m=+0.077362784 container exec_died 861f6a1b65dda022baecf3a1d543dbc6380dd0161a45bd75168d782fe13058a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 05 10:35:46 compute-0 podman[293561]: 2025-12-05 10:35:46.278396897 +0000 UTC m=+0.058708917 container exec d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 10:35:46 compute-0 podman[293561]: 2025-12-05 10:35:46.317820278 +0000 UTC m=+0.098132278 container exec_died d9e8b099f4ebaee346f8061412d4a8984a673def2f27be0c01f65420d490d11b (image=quay.io/ceph/haproxy:2.3, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-haproxy-nfs-cephfs-compute-0-ijjpxl)
Dec 05 10:35:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1404: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:46.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:47.567Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:35:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:47.567Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:35:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:47.569Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:35:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:35:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:47.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:35:47 compute-0 podman[293629]: 2025-12-05 10:35:47.960327113 +0000 UTC m=+1.455159883 container exec f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, version=2.2.4, name=keepalived, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, release=1793, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 10:35:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:35:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:35:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:35:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:35:48 compute-0 ceph-mon[74418]: pgmap v1404: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:48 compute-0 podman[293629]: 2025-12-05 10:35:48.484008157 +0000 UTC m=+1.978840937 container exec_died f7b5b1b62eb2f2ef2b21d84115c241fecd367e2660057af54d112069fb98fee2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-keepalived-nfs-cephfs-compute-0-ewczkf, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.buildah.version=1.28.2, distribution-scope=public, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9)
Dec 05 10:35:48 compute-0 nova_compute[257087]: 2025-12-05 10:35:48.910 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:35:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1405: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:48.944Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:48.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:35:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:49.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:49.726722) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930949726856, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1555, "num_deletes": 257, "total_data_size": 2979285, "memory_usage": 3011472, "flush_reason": "Manual Compaction"}
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930949761156, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2902235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37168, "largest_seqno": 38721, "table_properties": {"data_size": 2894818, "index_size": 4360, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15871, "raw_average_key_size": 20, "raw_value_size": 2879792, "raw_average_value_size": 3701, "num_data_blocks": 185, "num_entries": 778, "num_filter_entries": 778, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930786, "oldest_key_time": 1764930786, "file_creation_time": 1764930949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 34569 microseconds, and 9023 cpu microseconds.
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:49.761296) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2902235 bytes OK
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:49.761323) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:49.777002) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:49.777033) EVENT_LOG_v1 {"time_micros": 1764930949777023, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:49.777058) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2972547, prev total WAL file size 2989622, number of live WAL files 2.
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:49.782707) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323537' seq:0, type:0; will stop at (end)
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2834KB)], [80(12MB)]
Dec 05 10:35:49 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930949782815, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 16517593, "oldest_snapshot_seqno": -1}
Dec 05 10:35:49 compute-0 ceph-mon[74418]: pgmap v1405: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7157 keys, 16350252 bytes, temperature: kUnknown
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930950238810, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 16350252, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16301423, "index_size": 29837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17925, "raw_key_size": 188227, "raw_average_key_size": 26, "raw_value_size": 16171563, "raw_average_value_size": 2259, "num_data_blocks": 1175, "num_entries": 7157, "num_filter_entries": 7157, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764930949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:50.239448) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 16350252 bytes
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:50.272190) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 36.2 rd, 35.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 13.0 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 7692, records dropped: 535 output_compression: NoCompression
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:50.272219) EVENT_LOG_v1 {"time_micros": 1764930950272207, "job": 46, "event": "compaction_finished", "compaction_time_micros": 456170, "compaction_time_cpu_micros": 51441, "output_level": 6, "num_output_files": 1, "total_output_size": 16350252, "num_input_records": 7692, "num_output_records": 7157, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930950273125, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930950276424, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:49.782594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:50.276606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:50.276616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:50.276618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:50.276620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:35:50 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:35:50.276622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:35:50 compute-0 podman[293694]: 2025-12-05 10:35:50.614094084 +0000 UTC m=+0.714564014 container exec a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:35:50 compute-0 podman[293694]: 2025-12-05 10:35:50.702887188 +0000 UTC m=+0.803357098 container exec_died a6bf5a7c9164ff8c7d796ddbce8ee13684bcd8fc1f13f413cae2b1b7d3070101 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:35:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1406: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:50.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:51 compute-0 podman[293769]: 2025-12-05 10:35:51.265747277 +0000 UTC m=+0.363599534 container exec 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 10:35:51 compute-0 podman[293769]: 2025-12-05 10:35:51.618845934 +0000 UTC m=+0.716698191 container exec_died 3b551885afbe379856505caa3937e6b1ace00cca963d38424c7c1ad23683b260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 05 10:35:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:51.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:52 compute-0 ceph-mon[74418]: pgmap v1406: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:35:52 compute-0 podman[293884]: 2025-12-05 10:35:52.87507669 +0000 UTC m=+0.497732830 container exec 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:35:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1407: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:52.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:35:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:35:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:35:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:35:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:35:53 compute-0 podman[293884]: 2025-12-05 10:35:53.090016232 +0000 UTC m=+0.712672302 container exec_died 80aa96702958ad43e0567806d54f697202c21330aa583b76c00e79d0dc023ab8 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:35:53 compute-0 sudo[293116]: pam_unix(sudo:session): session closed for user root
Dec 05 10:35:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:35:53 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:35:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:35:53 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:35:53 compute-0 sudo[293928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:35:53 compute-0 sudo[293928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:53 compute-0 sudo[293928]: pam_unix(sudo:session): session closed for user root
Dec 05 10:35:53 compute-0 sudo[293953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:35:53 compute-0 sudo[293953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:35:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:53.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:35:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:53.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:53 compute-0 nova_compute[257087]: 2025-12-05 10:35:53.913 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:35:54 compute-0 sudo[293953]: pam_unix(sudo:session): session closed for user root
Dec 05 10:35:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:35:54 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:35:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:35:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:35:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:35:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1408: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:35:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:35:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:35:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:35:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:35:54 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:35:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:35:54 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:35:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:35:54 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:35:54 compute-0 sudo[294011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:35:54 compute-0 sudo[294011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:54 compute-0 sudo[294011]: pam_unix(sudo:session): session closed for user root
Dec 05 10:35:54 compute-0 ceph-mon[74418]: pgmap v1407: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:35:54 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:35:54 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:35:54 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:35:54 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:35:54 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:35:54 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:35:54 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:35:54 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:35:54 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:35:54 compute-0 sudo[294036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:35:54 compute-0 sudo[294036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:54 compute-0 podman[294106]: 2025-12-05 10:35:54.669513524 +0000 UTC m=+0.046867635 container create e3c2bb85a23775690cff0171aca7165ca1a472d27a1ae34d11ac885fdab77176 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hypatia, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 10:35:54 compute-0 systemd[1]: Started libpod-conmon-e3c2bb85a23775690cff0171aca7165ca1a472d27a1ae34d11ac885fdab77176.scope.
Dec 05 10:35:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:35:54 compute-0 podman[294106]: 2025-12-05 10:35:54.651050873 +0000 UTC m=+0.028404994 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:35:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:35:54 compute-0 podman[294106]: 2025-12-05 10:35:54.767711613 +0000 UTC m=+0.145065744 container init e3c2bb85a23775690cff0171aca7165ca1a472d27a1ae34d11ac885fdab77176 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 05 10:35:54 compute-0 podman[294106]: 2025-12-05 10:35:54.777742936 +0000 UTC m=+0.155097077 container start e3c2bb85a23775690cff0171aca7165ca1a472d27a1ae34d11ac885fdab77176 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hypatia, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:35:54 compute-0 podman[294106]: 2025-12-05 10:35:54.781650812 +0000 UTC m=+0.159004913 container attach e3c2bb85a23775690cff0171aca7165ca1a472d27a1ae34d11ac885fdab77176 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hypatia, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 10:35:54 compute-0 peaceful_hypatia[294123]: 167 167
Dec 05 10:35:54 compute-0 systemd[1]: libpod-e3c2bb85a23775690cff0171aca7165ca1a472d27a1ae34d11ac885fdab77176.scope: Deactivated successfully.
Dec 05 10:35:54 compute-0 podman[294106]: 2025-12-05 10:35:54.787699696 +0000 UTC m=+0.165053847 container died e3c2bb85a23775690cff0171aca7165ca1a472d27a1ae34d11ac885fdab77176 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hypatia, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 05 10:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2815282809eb65e9a140a8c5d0724bd9c1b70bc49f5556520c66cb0a614763a3-merged.mount: Deactivated successfully.
Dec 05 10:35:54 compute-0 podman[294106]: 2025-12-05 10:35:54.841581602 +0000 UTC m=+0.218935743 container remove e3c2bb85a23775690cff0171aca7165ca1a472d27a1ae34d11ac885fdab77176 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 05 10:35:54 compute-0 systemd[1]: libpod-conmon-e3c2bb85a23775690cff0171aca7165ca1a472d27a1ae34d11ac885fdab77176.scope: Deactivated successfully.
Dec 05 10:35:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:54.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:55 compute-0 podman[294146]: 2025-12-05 10:35:55.025457509 +0000 UTC m=+0.053463854 container create a28a03a758d36d05ff557b67aef3d266da014f9d87e079e9d6afe721cb5503c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_yalow, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:35:55 compute-0 systemd[1]: Started libpod-conmon-a28a03a758d36d05ff557b67aef3d266da014f9d87e079e9d6afe721cb5503c4.scope.
Dec 05 10:35:55 compute-0 podman[294146]: 2025-12-05 10:35:54.998925308 +0000 UTC m=+0.026931703 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:35:55 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2f6f6a4a0d07e274a369a93b0a9e1b307c833001ccd476cb728036dfd3ee67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2f6f6a4a0d07e274a369a93b0a9e1b307c833001ccd476cb728036dfd3ee67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2f6f6a4a0d07e274a369a93b0a9e1b307c833001ccd476cb728036dfd3ee67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2f6f6a4a0d07e274a369a93b0a9e1b307c833001ccd476cb728036dfd3ee67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2f6f6a4a0d07e274a369a93b0a9e1b307c833001ccd476cb728036dfd3ee67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:35:55 compute-0 podman[294146]: 2025-12-05 10:35:55.47208084 +0000 UTC m=+0.500087285 container init a28a03a758d36d05ff557b67aef3d266da014f9d87e079e9d6afe721cb5503c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Dec 05 10:35:55 compute-0 ceph-mon[74418]: pgmap v1408: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:35:55 compute-0 podman[294146]: 2025-12-05 10:35:55.490393967 +0000 UTC m=+0.518400332 container start a28a03a758d36d05ff557b67aef3d266da014f9d87e079e9d6afe721cb5503c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:35:55 compute-0 podman[294146]: 2025-12-05 10:35:55.494315314 +0000 UTC m=+0.522321679 container attach a28a03a758d36d05ff557b67aef3d266da014f9d87e079e9d6afe721cb5503c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_yalow, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 05 10:35:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:35:55] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 05 10:35:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:35:55] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 05 10:35:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:55.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:55 compute-0 distracted_yalow[294162]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:35:55 compute-0 distracted_yalow[294162]: --> All data devices are unavailable
Dec 05 10:35:55 compute-0 systemd[1]: libpod-a28a03a758d36d05ff557b67aef3d266da014f9d87e079e9d6afe721cb5503c4.scope: Deactivated successfully.
Dec 05 10:35:55 compute-0 podman[294146]: 2025-12-05 10:35:55.870589551 +0000 UTC m=+0.898595906 container died a28a03a758d36d05ff557b67aef3d266da014f9d87e079e9d6afe721cb5503c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 05 10:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca2f6f6a4a0d07e274a369a93b0a9e1b307c833001ccd476cb728036dfd3ee67-merged.mount: Deactivated successfully.
Dec 05 10:35:55 compute-0 podman[294146]: 2025-12-05 10:35:55.918947725 +0000 UTC m=+0.946954070 container remove a28a03a758d36d05ff557b67aef3d266da014f9d87e079e9d6afe721cb5503c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 05 10:35:55 compute-0 systemd[1]: libpod-conmon-a28a03a758d36d05ff557b67aef3d266da014f9d87e079e9d6afe721cb5503c4.scope: Deactivated successfully.
Dec 05 10:35:55 compute-0 sudo[294036]: pam_unix(sudo:session): session closed for user root
Dec 05 10:35:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1409: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:35:56 compute-0 sudo[294190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:35:56 compute-0 sudo[294190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:56 compute-0 sudo[294190]: pam_unix(sudo:session): session closed for user root
Dec 05 10:35:56 compute-0 sudo[294215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:35:56 compute-0 sudo[294215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:56 compute-0 podman[294281]: 2025-12-05 10:35:56.521887743 +0000 UTC m=+0.023930511 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:35:56 compute-0 podman[294281]: 2025-12-05 10:35:56.781935182 +0000 UTC m=+0.283977920 container create ce8b2aa9a7931d938aecfe2e03e0b754c5eb63123d3bfb26e7c9f60fe0e67308 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shannon, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:35:56 compute-0 ceph-mon[74418]: pgmap v1409: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:35:56 compute-0 systemd[1]: Started libpod-conmon-ce8b2aa9a7931d938aecfe2e03e0b754c5eb63123d3bfb26e7c9f60fe0e67308.scope.
Dec 05 10:35:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:35:56 compute-0 podman[294281]: 2025-12-05 10:35:56.863026586 +0000 UTC m=+0.365069354 container init ce8b2aa9a7931d938aecfe2e03e0b754c5eb63123d3bfb26e7c9f60fe0e67308 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:35:56 compute-0 podman[294281]: 2025-12-05 10:35:56.870004066 +0000 UTC m=+0.372046804 container start ce8b2aa9a7931d938aecfe2e03e0b754c5eb63123d3bfb26e7c9f60fe0e67308 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shannon, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 05 10:35:56 compute-0 podman[294281]: 2025-12-05 10:35:56.87310778 +0000 UTC m=+0.375150598 container attach ce8b2aa9a7931d938aecfe2e03e0b754c5eb63123d3bfb26e7c9f60fe0e67308 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shannon, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:35:56 compute-0 agitated_shannon[294298]: 167 167
Dec 05 10:35:56 compute-0 systemd[1]: libpod-ce8b2aa9a7931d938aecfe2e03e0b754c5eb63123d3bfb26e7c9f60fe0e67308.scope: Deactivated successfully.
Dec 05 10:35:56 compute-0 podman[294281]: 2025-12-05 10:35:56.875031132 +0000 UTC m=+0.377073870 container died ce8b2aa9a7931d938aecfe2e03e0b754c5eb63123d3bfb26e7c9f60fe0e67308 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 10:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b9c8f79bd7da09fdbe8f0bd360ff9286040e042142efefe1847bf6e813ad7f8-merged.mount: Deactivated successfully.
Dec 05 10:35:56 compute-0 podman[294281]: 2025-12-05 10:35:56.914180336 +0000 UTC m=+0.416223074 container remove ce8b2aa9a7931d938aecfe2e03e0b754c5eb63123d3bfb26e7c9f60fe0e67308 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shannon, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:35:56 compute-0 systemd[1]: libpod-conmon-ce8b2aa9a7931d938aecfe2e03e0b754c5eb63123d3bfb26e7c9f60fe0e67308.scope: Deactivated successfully.
Dec 05 10:35:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:56.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:57 compute-0 podman[294321]: 2025-12-05 10:35:57.097201811 +0000 UTC m=+0.046492875 container create c3c87df441690dd33c129f9498b1c8013584ea0ff47582f7378cc85987e1dfe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 10:35:57 compute-0 systemd[1]: Started libpod-conmon-c3c87df441690dd33c129f9498b1c8013584ea0ff47582f7378cc85987e1dfe1.scope.
Dec 05 10:35:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:35:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fda1392dbeda9dd2e962864179ca8e24db5d82bb2014eb041cf263b01de205/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:35:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fda1392dbeda9dd2e962864179ca8e24db5d82bb2014eb041cf263b01de205/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:35:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fda1392dbeda9dd2e962864179ca8e24db5d82bb2014eb041cf263b01de205/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:35:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fda1392dbeda9dd2e962864179ca8e24db5d82bb2014eb041cf263b01de205/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:35:57 compute-0 podman[294321]: 2025-12-05 10:35:57.077836005 +0000 UTC m=+0.027127069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:35:57 compute-0 podman[294321]: 2025-12-05 10:35:57.17406498 +0000 UTC m=+0.123356064 container init c3c87df441690dd33c129f9498b1c8013584ea0ff47582f7378cc85987e1dfe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 10:35:57 compute-0 podman[294321]: 2025-12-05 10:35:57.181197164 +0000 UTC m=+0.130488228 container start c3c87df441690dd33c129f9498b1c8013584ea0ff47582f7378cc85987e1dfe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 10:35:57 compute-0 podman[294321]: 2025-12-05 10:35:57.184268808 +0000 UTC m=+0.133559882 container attach c3c87df441690dd33c129f9498b1c8013584ea0ff47582f7378cc85987e1dfe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 10:35:57 compute-0 elated_beaver[294337]: {
Dec 05 10:35:57 compute-0 elated_beaver[294337]:     "1": [
Dec 05 10:35:57 compute-0 elated_beaver[294337]:         {
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             "devices": [
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "/dev/loop3"
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             ],
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             "lv_name": "ceph_lv0",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             "lv_size": "21470642176",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             "name": "ceph_lv0",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             "tags": {
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.cluster_name": "ceph",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.crush_device_class": "",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.encrypted": "0",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.osd_id": "1",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.type": "block",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.vdo": "0",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:                 "ceph.with_tpm": "0"
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             },
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             "type": "block",
Dec 05 10:35:57 compute-0 elated_beaver[294337]:             "vg_name": "ceph_vg0"
Dec 05 10:35:57 compute-0 elated_beaver[294337]:         }
Dec 05 10:35:57 compute-0 elated_beaver[294337]:     ]
Dec 05 10:35:57 compute-0 elated_beaver[294337]: }
Dec 05 10:35:57 compute-0 systemd[1]: libpod-c3c87df441690dd33c129f9498b1c8013584ea0ff47582f7378cc85987e1dfe1.scope: Deactivated successfully.
Dec 05 10:35:57 compute-0 podman[294321]: 2025-12-05 10:35:57.490656035 +0000 UTC m=+0.439947099 container died c3c87df441690dd33c129f9498b1c8013584ea0ff47582f7378cc85987e1dfe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_beaver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:35:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4fda1392dbeda9dd2e962864179ca8e24db5d82bb2014eb041cf263b01de205-merged.mount: Deactivated successfully.
Dec 05 10:35:57 compute-0 podman[294321]: 2025-12-05 10:35:57.531485345 +0000 UTC m=+0.480776409 container remove c3c87df441690dd33c129f9498b1c8013584ea0ff47582f7378cc85987e1dfe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_beaver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 05 10:35:57 compute-0 systemd[1]: libpod-conmon-c3c87df441690dd33c129f9498b1c8013584ea0ff47582f7378cc85987e1dfe1.scope: Deactivated successfully.
Dec 05 10:35:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:57.569Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:57 compute-0 sudo[294215]: pam_unix(sudo:session): session closed for user root
Dec 05 10:35:57 compute-0 sudo[294357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:35:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:35:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:35:57 compute-0 sudo[294357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:57 compute-0 sudo[294357]: pam_unix(sudo:session): session closed for user root
Dec 05 10:35:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:35:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:35:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:35:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:35:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:35:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:35:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:57.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:57 compute-0 sudo[294382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:35:57 compute-0 sudo[294382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:35:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:35:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:35:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:35:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:35:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1410: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:35:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2949519953' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:35:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2949519953' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:35:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:35:58 compute-0 podman[294447]: 2025-12-05 10:35:58.152643218 +0000 UTC m=+0.038694381 container create 505f28c5558dcdee41cba245c2ff948549139cf19105cee2649e4bb591f11fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mahavira, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:35:58 compute-0 systemd[1]: Started libpod-conmon-505f28c5558dcdee41cba245c2ff948549139cf19105cee2649e4bb591f11fdc.scope.
Dec 05 10:35:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:35:58 compute-0 podman[294447]: 2025-12-05 10:35:58.136005697 +0000 UTC m=+0.022056880 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:35:58 compute-0 sudo[294468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:35:58 compute-0 sudo[294468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:35:58 compute-0 sudo[294468]: pam_unix(sudo:session): session closed for user root
Dec 05 10:35:58 compute-0 nova_compute[257087]: 2025-12-05 10:35:58.915 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:35:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:35:58.945Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:35:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:35:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:35:58.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:35:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:35:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:35:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:35:59.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1411: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:36:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:00.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:01 compute-0 podman[294447]: 2025-12-05 10:36:01.569502322 +0000 UTC m=+3.455553505 container init 505f28c5558dcdee41cba245c2ff948549139cf19105cee2649e4bb591f11fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec 05 10:36:01 compute-0 podman[294447]: 2025-12-05 10:36:01.579379691 +0000 UTC m=+3.465430854 container start 505f28c5558dcdee41cba245c2ff948549139cf19105cee2649e4bb591f11fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mahavira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:36:01 compute-0 keen_mahavira[294464]: 167 167
Dec 05 10:36:01 compute-0 systemd[1]: libpod-505f28c5558dcdee41cba245c2ff948549139cf19105cee2649e4bb591f11fdc.scope: Deactivated successfully.
Dec 05 10:36:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:01.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:36:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1412: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:36:02 compute-0 podman[294447]: 2025-12-05 10:36:02.092142548 +0000 UTC m=+3.978193811 container attach 505f28c5558dcdee41cba245c2ff948549139cf19105cee2649e4bb591f11fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:36:02 compute-0 podman[294447]: 2025-12-05 10:36:02.093459784 +0000 UTC m=+3.979510947 container died 505f28c5558dcdee41cba245c2ff948549139cf19105cee2649e4bb591f11fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mahavira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 05 10:36:02 compute-0 ceph-mon[74418]: pgmap v1410: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:36:02 compute-0 ceph-mon[74418]: pgmap v1411: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b1ae56a23a8ce8883b1b6bc7c514fccd952bec57be96cd7773e5de3ba3b828b-merged.mount: Deactivated successfully.
Dec 05 10:36:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:02.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:36:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:36:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:36:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:36:03 compute-0 ceph-mon[74418]: pgmap v1412: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:36:03 compute-0 podman[294447]: 2025-12-05 10:36:03.56274066 +0000 UTC m=+5.448791833 container remove 505f28c5558dcdee41cba245c2ff948549139cf19105cee2649e4bb591f11fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:36:03 compute-0 systemd[1]: libpod-conmon-505f28c5558dcdee41cba245c2ff948549139cf19105cee2649e4bb591f11fdc.scope: Deactivated successfully.
Dec 05 10:36:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:03.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:03 compute-0 podman[294520]: 2025-12-05 10:36:03.729325957 +0000 UTC m=+0.039288578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:36:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:03.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:03 compute-0 nova_compute[257087]: 2025-12-05 10:36:03.918 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:36:03 compute-0 podman[294520]: 2025-12-05 10:36:03.984734739 +0000 UTC m=+0.294697330 container create dccf34b87698b0256d0ae2dec1e935481cbf916586d05a4efd6e89e0f1ccc800 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:36:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1413: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:36:04 compute-0 systemd[1]: Started libpod-conmon-dccf34b87698b0256d0ae2dec1e935481cbf916586d05a4efd6e89e0f1ccc800.scope.
Dec 05 10:36:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4425a50e6a2094ccbb1b7976c2d056dc9ee7349e5dc1c50ac77717de74fb6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4425a50e6a2094ccbb1b7976c2d056dc9ee7349e5dc1c50ac77717de74fb6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4425a50e6a2094ccbb1b7976c2d056dc9ee7349e5dc1c50ac77717de74fb6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4425a50e6a2094ccbb1b7976c2d056dc9ee7349e5dc1c50ac77717de74fb6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:36:04 compute-0 podman[294520]: 2025-12-05 10:36:04.891419645 +0000 UTC m=+1.201382256 container init dccf34b87698b0256d0ae2dec1e935481cbf916586d05a4efd6e89e0f1ccc800 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_gagarin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:36:04 compute-0 podman[294520]: 2025-12-05 10:36:04.898291021 +0000 UTC m=+1.208253612 container start dccf34b87698b0256d0ae2dec1e935481cbf916586d05a4efd6e89e0f1ccc800 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_gagarin, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 10:36:04 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:04 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:04 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:04.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:05 compute-0 ceph-mon[74418]: pgmap v1413: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:36:05 compute-0 podman[294520]: 2025-12-05 10:36:05.278454264 +0000 UTC m=+1.588416855 container attach dccf34b87698b0256d0ae2dec1e935481cbf916586d05a4efd6e89e0f1ccc800 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_gagarin, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 05 10:36:05 compute-0 lvm[294612]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:36:05 compute-0 lvm[294612]: VG ceph_vg0 finished
Dec 05 10:36:05 compute-0 charming_gagarin[294538]: {}
Dec 05 10:36:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:36:05] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:36:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:36:05] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:36:05 compute-0 systemd[1]: libpod-dccf34b87698b0256d0ae2dec1e935481cbf916586d05a4efd6e89e0f1ccc800.scope: Deactivated successfully.
Dec 05 10:36:05 compute-0 systemd[1]: libpod-dccf34b87698b0256d0ae2dec1e935481cbf916586d05a4efd6e89e0f1ccc800.scope: Consumed 1.227s CPU time.
Dec 05 10:36:05 compute-0 podman[294520]: 2025-12-05 10:36:05.653687173 +0000 UTC m=+1.963649774 container died dccf34b87698b0256d0ae2dec1e935481cbf916586d05a4efd6e89e0f1ccc800 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 05 10:36:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:05.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1414: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b4425a50e6a2094ccbb1b7976c2d056dc9ee7349e5dc1c50ac77717de74fb6e-merged.mount: Deactivated successfully.
Dec 05 10:36:06 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:06 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:06 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:06.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:36:07 compute-0 ceph-mon[74418]: pgmap v1414: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:07 compute-0 podman[294520]: 2025-12-05 10:36:07.277622953 +0000 UTC m=+3.587585554 container remove dccf34b87698b0256d0ae2dec1e935481cbf916586d05a4efd6e89e0f1ccc800 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:36:07 compute-0 systemd[1]: libpod-conmon-dccf34b87698b0256d0ae2dec1e935481cbf916586d05a4efd6e89e0f1ccc800.scope: Deactivated successfully.
Dec 05 10:36:07 compute-0 sudo[294382]: pam_unix(sudo:session): session closed for user root
Dec 05 10:36:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:36:07 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:36:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:36:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:07.572Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:07.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:36:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:36:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:36:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:36:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1415: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:36:08 compute-0 sudo[294631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:36:08 compute-0 sudo[294631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:36:08 compute-0 sudo[294631]: pam_unix(sudo:session): session closed for user root
Dec 05 10:36:08 compute-0 nova_compute[257087]: 2025-12-05 10:36:08.921 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:36:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:08.946Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:08 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:08 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:08 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:08.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:09 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:36:09 compute-0 ceph-mon[74418]: pgmap v1415: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:09 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:36:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:09.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:10 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1416: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:10 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:10 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:10 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:10.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:11 compute-0 ceph-mon[74418]: pgmap v1416: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:11.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:36:12 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1417: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:36:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:36:12 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:12 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:36:12 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:12.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:36:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:36:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:36:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:36:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:36:13 compute-0 ceph-mon[74418]: pgmap v1417: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:36:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:13.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:13.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:13 compute-0 nova_compute[257087]: 2025-12-05 10:36:13.923 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:36:14 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1418: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:14.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:15 compute-0 ceph-mon[74418]: pgmap v1418: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:36:15] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:36:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:36:15] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:36:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:15.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1419: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:16 compute-0 podman[294664]: 2025-12-05 10:36:16.392460772 +0000 UTC m=+0.053383642 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 10:36:16 compute-0 podman[294666]: 2025-12-05 10:36:16.404088139 +0000 UTC m=+0.063796285 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 05 10:36:16 compute-0 podman[294665]: 2025-12-05 10:36:16.42881983 +0000 UTC m=+0.089845702 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec 05 10:36:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:17.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:36:17 compute-0 ceph-mon[74418]: pgmap v1419: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:17 compute-0 nova_compute[257087]: 2025-12-05 10:36:17.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:36:17 compute-0 nova_compute[257087]: 2025-12-05 10:36:17.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:36:17 compute-0 nova_compute[257087]: 2025-12-05 10:36:17.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:36:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:17.574Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:17.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:36:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:36:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:36:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:36:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1420: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:18 compute-0 nova_compute[257087]: 2025-12-05 10:36:18.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:36:18 compute-0 sudo[294730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:36:18 compute-0 sudo[294730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:36:18 compute-0 sudo[294730]: pam_unix(sudo:session): session closed for user root
Dec 05 10:36:18 compute-0 nova_compute[257087]: 2025-12-05 10:36:18.925 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:36:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:18.947Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:36:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:18.947Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:36:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:18.947Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:36:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:19.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:19 compute-0 nova_compute[257087]: 2025-12-05 10:36:19.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:36:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:36:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:19.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:36:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1421: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:20 compute-0 nova_compute[257087]: 2025-12-05 10:36:20.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:36:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:36:20.599 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:36:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:36:20.600 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:36:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:36:20.600 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:36:20 compute-0 nova_compute[257087]: 2025-12-05 10:36:20.866 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:36:20 compute-0 nova_compute[257087]: 2025-12-05 10:36:20.867 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:36:20 compute-0 nova_compute[257087]: 2025-12-05 10:36:20.867 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:36:20 compute-0 nova_compute[257087]: 2025-12-05 10:36:20.867 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:36:20 compute-0 nova_compute[257087]: 2025-12-05 10:36:20.868 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:36:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:21.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:21 compute-0 ceph-mon[74418]: pgmap v1420: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:21 compute-0 ceph-mon[74418]: pgmap v1421: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:36:21 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1119570645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:36:21 compute-0 nova_compute[257087]: 2025-12-05 10:36:21.347 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:36:21 compute-0 nova_compute[257087]: 2025-12-05 10:36:21.533 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:36:21 compute-0 nova_compute[257087]: 2025-12-05 10:36:21.535 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4461MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:36:21 compute-0 nova_compute[257087]: 2025-12-05 10:36:21.535 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:36:21 compute-0 nova_compute[257087]: 2025-12-05 10:36:21.536 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:36:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:21.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:36:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1422: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:22 compute-0 nova_compute[257087]: 2025-12-05 10:36:22.535 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:36:22 compute-0 nova_compute[257087]: 2025-12-05 10:36:22.536 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:36:22 compute-0 nova_compute[257087]: 2025-12-05 10:36:22.562 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:36:22 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1119570645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:36:22 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3973178311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:36:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:36:22 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305466474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:36:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:36:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:36:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:36:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:36:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:23.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:23 compute-0 nova_compute[257087]: 2025-12-05 10:36:23.021 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:36:23 compute-0 nova_compute[257087]: 2025-12-05 10:36:23.028 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:36:23 compute-0 nova_compute[257087]: 2025-12-05 10:36:23.626 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:36:23 compute-0 nova_compute[257087]: 2025-12-05 10:36:23.628 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:36:23 compute-0 nova_compute[257087]: 2025-12-05 10:36:23.629 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:36:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:23.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:23.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:23 compute-0 nova_compute[257087]: 2025-12-05 10:36:23.927 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:36:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1423: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:25.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:36:25] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:36:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:36:25] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:36:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:25.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1424: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:26 compute-0 ceph-mon[74418]: pgmap v1422: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:26 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3241793897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:36:26 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3305466474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:36:26 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1568051462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:36:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:27.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:36:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:27.575Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:27 compute-0 nova_compute[257087]: 2025-12-05 10:36:27.629 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:36:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:36:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:36:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:36:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:36:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:36:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:36:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:36:27
Dec 05 10:36:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:36:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:36:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.log', 'images', 'backups', 'volumes', '.nfs', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'vms']
Dec 05 10:36:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:36:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:27.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:36:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:36:27 compute-0 ceph-mon[74418]: pgmap v1423: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:27 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1877054702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:36:27 compute-0 ceph-mon[74418]: pgmap v1424: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:27 compute-0 nova_compute[257087]: 2025-12-05 10:36:27.856 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:36:27 compute-0 nova_compute[257087]: 2025-12-05 10:36:27.856 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:36:27 compute-0 nova_compute[257087]: 2025-12-05 10:36:27.857 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:36:27 compute-0 nova_compute[257087]: 2025-12-05 10:36:27.917 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:36:27 compute-0 nova_compute[257087]: 2025-12-05 10:36:27.917 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:36:27 compute-0 nova_compute[257087]: 2025-12-05 10:36:27.917 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:36:27 compute-0 nova_compute[257087]: 2025-12-05 10:36:27.917 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:36:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:36:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:36:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:36:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1425: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:36:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:36:28 compute-0 nova_compute[257087]: 2025-12-05 10:36:28.932 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:36:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:28.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:36:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:28.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:36:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:28.949Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:29.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:29 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:36:29 compute-0 ceph-mon[74418]: pgmap v1425: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:29.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1426: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:31.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:31 compute-0 ceph-mon[74418]: pgmap v1426: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:31.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:36:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1427: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:36:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:36:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:36:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:36:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:33.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:33 compute-0 ceph-mon[74418]: pgmap v1427: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:33.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:33.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:33 compute-0 nova_compute[257087]: 2025-12-05 10:36:33.932 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:36:33 compute-0 nova_compute[257087]: 2025-12-05 10:36:33.935 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:36:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1428: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:35.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:35 compute-0 ceph-mon[74418]: pgmap v1428: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:36:35] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:36:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:36:35] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:36:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:35.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1429: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:37.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:36:37 compute-0 ceph-mon[74418]: pgmap v1429: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:37.576Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:37.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:36:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:36:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:36:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:36:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1430: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:38 compute-0 ceph-mon[74418]: pgmap v1430: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:38 compute-0 sudo[294819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:36:38 compute-0 sudo[294819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:36:38 compute-0 sudo[294819]: pam_unix(sudo:session): session closed for user root
Dec 05 10:36:38 compute-0 nova_compute[257087]: 2025-12-05 10:36:38.934 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:36:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:38.950Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:39.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:39.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1431: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:41.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:41 compute-0 ceph-mon[74418]: pgmap v1431: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:41.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1432: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:36:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:36:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:36:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:36:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:36:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:36:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:36:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:43.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:43 compute-0 ceph-mon[74418]: pgmap v1432: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:36:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:43.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:43.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:43 compute-0 nova_compute[257087]: 2025-12-05 10:36:43.937 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:36:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1433: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:45.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:45 compute-0 ceph-mon[74418]: pgmap v1433: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:36:45] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:36:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:36:45] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:36:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:45.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1434: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:47.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:47 compute-0 ceph-mon[74418]: pgmap v1434: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:36:47 compute-0 podman[294852]: 2025-12-05 10:36:47.392932734 +0000 UTC m=+0.056348482 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:36:47 compute-0 podman[294854]: 2025-12-05 10:36:47.399660217 +0000 UTC m=+0.059185330 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:36:47 compute-0 podman[294853]: 2025-12-05 10:36:47.418416997 +0000 UTC m=+0.080518480 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:36:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:47.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:47.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:36:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:36:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:36:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:36:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1435: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:48 compute-0 nova_compute[257087]: 2025-12-05 10:36:48.939 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:36:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:48.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:49.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:49 compute-0 ceph-mon[74418]: pgmap v1435: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:49.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1436: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:50 compute-0 ceph-mon[74418]: pgmap v1436: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:51.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:36:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:51.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:36:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1437: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:36:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:36:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:36:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:36:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:36:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:53.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:53 compute-0 ceph-mon[74418]: pgmap v1437: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:36:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:53.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:36:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:53.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:53 compute-0 nova_compute[257087]: 2025-12-05 10:36:53.941 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:36:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1438: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:55.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:55 compute-0 ceph-mon[74418]: pgmap v1438: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:36:55] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:36:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:36:55] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:36:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:55.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1439: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:36:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3803311111' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:36:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:36:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3803311111' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:36:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:57.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:57 compute-0 ceph-mon[74418]: pgmap v1439: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:36:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3803311111' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:36:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3803311111' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:36:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:36:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:57.580Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:36:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:36:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:36:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:36:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:36:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:36:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:36:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:36:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:57.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:36:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:36:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:36:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:36:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:36:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:36:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1440: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:36:58 compute-0 sudo[294925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:36:58 compute-0 sudo[294925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:36:58 compute-0 sudo[294925]: pam_unix(sudo:session): session closed for user root
Dec 05 10:36:58 compute-0 nova_compute[257087]: 2025-12-05 10:36:58.943 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:36:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:36:58.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:36:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:36:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:36:59.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:36:59 compute-0 ceph-mon[74418]: pgmap v1440: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:36:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:36:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:36:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:36:59.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1441: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:37:00 compute-0 ceph-mon[74418]: pgmap v1441: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:37:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:37:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:01.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:37:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:01.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1442: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:37:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:37:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:37:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:37:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:37:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:03.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:03 compute-0 ceph-mon[74418]: pgmap v1442: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:03.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:03.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:03 compute-0 nova_compute[257087]: 2025-12-05 10:37:03.946 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:04 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1443: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:05.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:05 compute-0 ceph-mon[74418]: pgmap v1443: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:37:05] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:37:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:37:05] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:37:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:37:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:05.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:37:06 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1444: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:37:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:07.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:37:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:07.581Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:07.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:07.875135) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764931027875196, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 883, "num_deletes": 251, "total_data_size": 1550591, "memory_usage": 1575920, "flush_reason": "Manual Compaction"}
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Dec 05 10:37:07 compute-0 ceph-mon[74418]: pgmap v1444: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764931027893672, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1503294, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38722, "largest_seqno": 39604, "table_properties": {"data_size": 1498778, "index_size": 2168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9914, "raw_average_key_size": 19, "raw_value_size": 1489746, "raw_average_value_size": 2979, "num_data_blocks": 93, "num_entries": 500, "num_filter_entries": 500, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930949, "oldest_key_time": 1764930949, "file_creation_time": 1764931027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 18608 microseconds, and 6148 cpu microseconds.
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:07.893745) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1503294 bytes OK
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:07.893783) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:07.897550) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:07.897587) EVENT_LOG_v1 {"time_micros": 1764931027897577, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:07.897610) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1546355, prev total WAL file size 1546355, number of live WAL files 2.
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:07.898747) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1468KB)], [83(15MB)]
Dec 05 10:37:07 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764931027898787, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 17853546, "oldest_snapshot_seqno": -1}
Dec 05 10:37:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:37:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:37:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:37:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 7141 keys, 15492150 bytes, temperature: kUnknown
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764931028058145, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 15492150, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15444618, "index_size": 28604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 188629, "raw_average_key_size": 26, "raw_value_size": 15316056, "raw_average_value_size": 2144, "num_data_blocks": 1113, "num_entries": 7141, "num_filter_entries": 7141, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764931027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:08.058525) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 15492150 bytes
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:08.060114) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.9 rd, 97.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 15.6 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(22.2) write-amplify(10.3) OK, records in: 7657, records dropped: 516 output_compression: NoCompression
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:08.060165) EVENT_LOG_v1 {"time_micros": 1764931028060146, "job": 48, "event": "compaction_finished", "compaction_time_micros": 159504, "compaction_time_cpu_micros": 50121, "output_level": 6, "num_output_files": 1, "total_output_size": 15492150, "num_input_records": 7657, "num_output_records": 7141, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764931028060673, "job": 48, "event": "table_file_deletion", "file_number": 85}
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764931028063400, "job": 48, "event": "table_file_deletion", "file_number": 83}
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:07.898472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:08.063499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:08.063504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:08.063507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:08.063509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:37:08 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:08.063511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:37:08 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1445: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:08 compute-0 sudo[294958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:37:08 compute-0 sudo[294958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:08 compute-0 sudo[294958]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:08 compute-0 sudo[294983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 05 10:37:08 compute-0 sudo[294983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 05 10:37:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 05 10:37:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:08 compute-0 sudo[294983]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:08 compute-0 ceph-mon[74418]: pgmap v1445: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:08 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:37:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:08 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:37:08 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:08 compute-0 nova_compute[257087]: 2025-12-05 10:37:08.948 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:37:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:08.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:37:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:08.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:37:08 compute-0 sudo[295030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:37:08 compute-0 sudo[295030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:08 compute-0 sudo[295030]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:09.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:09 compute-0 sudo[295055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:37:09 compute-0 sudo[295055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:09 compute-0 sudo[295055]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:37:09 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:37:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:37:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:37:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1446: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:37:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:37:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:37:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:37:09 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:37:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:37:09 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:37:09 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:37:09 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:37:09 compute-0 sudo[295112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:37:09 compute-0 sudo[295112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:09 compute-0 sudo[295112]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:09 compute-0 sudo[295137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:37:09 compute-0 sudo[295137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:09.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:09 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:09 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:09 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:37:09 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:37:09 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:09 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:09 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:37:09 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:37:09 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:37:10 compute-0 podman[295206]: 2025-12-05 10:37:10.265695735 +0000 UTC m=+0.046240217 container create f2f9b27d5a949c38a99f8bf1acc819d34a40e2c16d0d527bdca14f838ac1028a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_rosalind, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:37:10 compute-0 systemd[1]: Started libpod-conmon-f2f9b27d5a949c38a99f8bf1acc819d34a40e2c16d0d527bdca14f838ac1028a.scope.
Dec 05 10:37:10 compute-0 podman[295206]: 2025-12-05 10:37:10.244932121 +0000 UTC m=+0.025476633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:37:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:37:10 compute-0 podman[295206]: 2025-12-05 10:37:10.376470136 +0000 UTC m=+0.157014638 container init f2f9b27d5a949c38a99f8bf1acc819d34a40e2c16d0d527bdca14f838ac1028a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_rosalind, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 05 10:37:10 compute-0 podman[295206]: 2025-12-05 10:37:10.384638859 +0000 UTC m=+0.165183341 container start f2f9b27d5a949c38a99f8bf1acc819d34a40e2c16d0d527bdca14f838ac1028a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_rosalind, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 05 10:37:10 compute-0 podman[295206]: 2025-12-05 10:37:10.389175101 +0000 UTC m=+0.169719583 container attach f2f9b27d5a949c38a99f8bf1acc819d34a40e2c16d0d527bdca14f838ac1028a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_rosalind, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:37:10 compute-0 naughty_rosalind[295223]: 167 167
Dec 05 10:37:10 compute-0 systemd[1]: libpod-f2f9b27d5a949c38a99f8bf1acc819d34a40e2c16d0d527bdca14f838ac1028a.scope: Deactivated successfully.
Dec 05 10:37:10 compute-0 conmon[295223]: conmon f2f9b27d5a949c38a99f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2f9b27d5a949c38a99f8bf1acc819d34a40e2c16d0d527bdca14f838ac1028a.scope/container/memory.events
Dec 05 10:37:10 compute-0 podman[295206]: 2025-12-05 10:37:10.393591332 +0000 UTC m=+0.174135814 container died f2f9b27d5a949c38a99f8bf1acc819d34a40e2c16d0d527bdca14f838ac1028a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b4d0c61570ef95f962bb2eb2099e84e6a8edced6175b1fc9d148d0d641bdb33-merged.mount: Deactivated successfully.
Dec 05 10:37:10 compute-0 podman[295206]: 2025-12-05 10:37:10.440822206 +0000 UTC m=+0.221366688 container remove f2f9b27d5a949c38a99f8bf1acc819d34a40e2c16d0d527bdca14f838ac1028a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 05 10:37:10 compute-0 systemd[1]: libpod-conmon-f2f9b27d5a949c38a99f8bf1acc819d34a40e2c16d0d527bdca14f838ac1028a.scope: Deactivated successfully.
Dec 05 10:37:10 compute-0 podman[295250]: 2025-12-05 10:37:10.638814948 +0000 UTC m=+0.059683174 container create c3178c9db6827966e0544eafd313d23880f44f79c28df4687443c44d46bc02f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_payne, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 10:37:10 compute-0 systemd[1]: Started libpod-conmon-c3178c9db6827966e0544eafd313d23880f44f79c28df4687443c44d46bc02f5.scope.
Dec 05 10:37:10 compute-0 podman[295250]: 2025-12-05 10:37:10.613916551 +0000 UTC m=+0.034784817 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:37:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59709564fae3894e88eaa642ed6fb91acd5a30d118b143613bc52f3ea029fd07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59709564fae3894e88eaa642ed6fb91acd5a30d118b143613bc52f3ea029fd07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59709564fae3894e88eaa642ed6fb91acd5a30d118b143613bc52f3ea029fd07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59709564fae3894e88eaa642ed6fb91acd5a30d118b143613bc52f3ea029fd07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59709564fae3894e88eaa642ed6fb91acd5a30d118b143613bc52f3ea029fd07/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:10 compute-0 podman[295250]: 2025-12-05 10:37:10.737107629 +0000 UTC m=+0.157975895 container init c3178c9db6827966e0544eafd313d23880f44f79c28df4687443c44d46bc02f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_payne, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 10:37:10 compute-0 podman[295250]: 2025-12-05 10:37:10.747766789 +0000 UTC m=+0.168635005 container start c3178c9db6827966e0544eafd313d23880f44f79c28df4687443c44d46bc02f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_payne, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:37:10 compute-0 podman[295250]: 2025-12-05 10:37:10.751699906 +0000 UTC m=+0.172568172 container attach c3178c9db6827966e0544eafd313d23880f44f79c28df4687443c44d46bc02f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_payne, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:37:10 compute-0 ceph-mon[74418]: pgmap v1446: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:37:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:11.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:11 compute-0 beautiful_payne[295267]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:37:11 compute-0 beautiful_payne[295267]: --> All data devices are unavailable
Dec 05 10:37:11 compute-0 systemd[1]: libpod-c3178c9db6827966e0544eafd313d23880f44f79c28df4687443c44d46bc02f5.scope: Deactivated successfully.
Dec 05 10:37:11 compute-0 podman[295250]: 2025-12-05 10:37:11.161769621 +0000 UTC m=+0.582637847 container died c3178c9db6827966e0544eafd313d23880f44f79c28df4687443c44d46bc02f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_payne, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-59709564fae3894e88eaa642ed6fb91acd5a30d118b143613bc52f3ea029fd07-merged.mount: Deactivated successfully.
Dec 05 10:37:11 compute-0 podman[295250]: 2025-12-05 10:37:11.218321238 +0000 UTC m=+0.639189464 container remove c3178c9db6827966e0544eafd313d23880f44f79c28df4687443c44d46bc02f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_payne, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:37:11 compute-0 systemd[1]: libpod-conmon-c3178c9db6827966e0544eafd313d23880f44f79c28df4687443c44d46bc02f5.scope: Deactivated successfully.
Dec 05 10:37:11 compute-0 sudo[295137]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:11 compute-0 sudo[295295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:37:11 compute-0 sudo[295295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:11 compute-0 sudo[295295]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:11 compute-0 sudo[295320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:37:11 compute-0 sudo[295320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1447: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:37:11 compute-0 podman[295384]: 2025-12-05 10:37:11.857825361 +0000 UTC m=+0.049293021 container create 78e2becaef5efcd4fdd616f2207dc3da8ff1a0a3335876945da3c192b3dd7d76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 10:37:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:11.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:11 compute-0 systemd[1]: Started libpod-conmon-78e2becaef5efcd4fdd616f2207dc3da8ff1a0a3335876945da3c192b3dd7d76.scope.
Dec 05 10:37:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:37:11 compute-0 podman[295384]: 2025-12-05 10:37:11.836915532 +0000 UTC m=+0.028383222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:37:11 compute-0 podman[295384]: 2025-12-05 10:37:11.941901016 +0000 UTC m=+0.133368696 container init 78e2becaef5efcd4fdd616f2207dc3da8ff1a0a3335876945da3c192b3dd7d76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_maxwell, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 05 10:37:11 compute-0 podman[295384]: 2025-12-05 10:37:11.949343138 +0000 UTC m=+0.140810848 container start 78e2becaef5efcd4fdd616f2207dc3da8ff1a0a3335876945da3c192b3dd7d76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_maxwell, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 05 10:37:11 compute-0 podman[295384]: 2025-12-05 10:37:11.954261002 +0000 UTC m=+0.145728702 container attach 78e2becaef5efcd4fdd616f2207dc3da8ff1a0a3335876945da3c192b3dd7d76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_maxwell, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:37:11 compute-0 suspicious_maxwell[295401]: 167 167
Dec 05 10:37:11 compute-0 systemd[1]: libpod-78e2becaef5efcd4fdd616f2207dc3da8ff1a0a3335876945da3c192b3dd7d76.scope: Deactivated successfully.
Dec 05 10:37:11 compute-0 podman[295384]: 2025-12-05 10:37:11.96007998 +0000 UTC m=+0.151547680 container died 78e2becaef5efcd4fdd616f2207dc3da8ff1a0a3335876945da3c192b3dd7d76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_maxwell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 10:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cd4eb5a46a99e134e4013954dfca696103fedc558cf7bfa84e14eba216395ff-merged.mount: Deactivated successfully.
Dec 05 10:37:12 compute-0 podman[295384]: 2025-12-05 10:37:12.00383233 +0000 UTC m=+0.195300000 container remove 78e2becaef5efcd4fdd616f2207dc3da8ff1a0a3335876945da3c192b3dd7d76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Dec 05 10:37:12 compute-0 systemd[1]: libpod-conmon-78e2becaef5efcd4fdd616f2207dc3da8ff1a0a3335876945da3c192b3dd7d76.scope: Deactivated successfully.
Dec 05 10:37:12 compute-0 podman[295423]: 2025-12-05 10:37:12.189464805 +0000 UTC m=+0.046722871 container create f5c2afe8f577409ebfd6465a3dfa317942d40c402792241f1f3eb11a8a9e5a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chaum, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:37:12 compute-0 systemd[1]: Started libpod-conmon-f5c2afe8f577409ebfd6465a3dfa317942d40c402792241f1f3eb11a8a9e5a35.scope.
Dec 05 10:37:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:37:12 compute-0 podman[295423]: 2025-12-05 10:37:12.169175124 +0000 UTC m=+0.026433210 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f5a6221877dd8578297e76024a3c57d397373f291c6e9b0170a191108d332d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f5a6221877dd8578297e76024a3c57d397373f291c6e9b0170a191108d332d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f5a6221877dd8578297e76024a3c57d397373f291c6e9b0170a191108d332d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f5a6221877dd8578297e76024a3c57d397373f291c6e9b0170a191108d332d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:12 compute-0 podman[295423]: 2025-12-05 10:37:12.281421605 +0000 UTC m=+0.138679681 container init f5c2afe8f577409ebfd6465a3dfa317942d40c402792241f1f3eb11a8a9e5a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chaum, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:37:12 compute-0 podman[295423]: 2025-12-05 10:37:12.29341586 +0000 UTC m=+0.150673926 container start f5c2afe8f577409ebfd6465a3dfa317942d40c402792241f1f3eb11a8a9e5a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:37:12 compute-0 podman[295423]: 2025-12-05 10:37:12.297768709 +0000 UTC m=+0.155026775 container attach f5c2afe8f577409ebfd6465a3dfa317942d40c402792241f1f3eb11a8a9e5a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chaum, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:37:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.389924) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764931032389973, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 353, "num_deletes": 253, "total_data_size": 261696, "memory_usage": 269120, "flush_reason": "Manual Compaction"}
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764931032394888, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 259203, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39605, "largest_seqno": 39957, "table_properties": {"data_size": 256925, "index_size": 442, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6406, "raw_average_key_size": 20, "raw_value_size": 252235, "raw_average_value_size": 824, "num_data_blocks": 17, "num_entries": 306, "num_filter_entries": 306, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764931028, "oldest_key_time": 1764931028, "file_creation_time": 1764931032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 5008 microseconds, and 2097 cpu microseconds.
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.394937) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 259203 bytes OK
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.394960) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.396679) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.396747) EVENT_LOG_v1 {"time_micros": 1764931032396734, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.396785) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 259296, prev total WAL file size 259296, number of live WAL files 2.
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.397510) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353037' seq:0, type:0; will stop at (end)
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(253KB)], [86(14MB)]
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764931032397588, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 15751353, "oldest_snapshot_seqno": -1}
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6926 keys, 11528525 bytes, temperature: kUnknown
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764931032524509, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 11528525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11487470, "index_size": 22639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 184353, "raw_average_key_size": 26, "raw_value_size": 11367673, "raw_average_value_size": 1641, "num_data_blocks": 874, "num_entries": 6926, "num_filter_entries": 6926, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764927800, "oldest_key_time": 0, "file_creation_time": 1764931032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c84246f-bc02-4e85-8436-bed956adac07", "db_session_id": "IJYRF1EZAD763P730E19", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.524852) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11528525 bytes
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.530215) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.0 rd, 90.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 14.8 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(105.2) write-amplify(44.5) OK, records in: 7447, records dropped: 521 output_compression: NoCompression
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.530391) EVENT_LOG_v1 {"time_micros": 1764931032530344, "job": 50, "event": "compaction_finished", "compaction_time_micros": 127019, "compaction_time_cpu_micros": 28285, "output_level": 6, "num_output_files": 1, "total_output_size": 11528525, "num_input_records": 7447, "num_output_records": 6926, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764931032530919, "job": 50, "event": "table_file_deletion", "file_number": 88}
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764931032533378, "job": 50, "event": "table_file_deletion", "file_number": 86}
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.397391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.533541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.533553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.533557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.533560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:37:12 compute-0 ceph-mon[74418]: rocksdb: (Original Log Time 2025/12/05-10:37:12.533563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]: {
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:     "1": [
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:         {
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             "devices": [
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "/dev/loop3"
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             ],
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             "lv_name": "ceph_lv0",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             "lv_size": "21470642176",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             "name": "ceph_lv0",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             "tags": {
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.cluster_name": "ceph",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.crush_device_class": "",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.encrypted": "0",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.osd_id": "1",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.type": "block",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.vdo": "0",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:                 "ceph.with_tpm": "0"
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             },
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             "type": "block",
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:             "vg_name": "ceph_vg0"
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:         }
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]:     ]
Dec 05 10:37:12 compute-0 intelligent_chaum[295439]: }
Dec 05 10:37:12 compute-0 systemd[1]: libpod-f5c2afe8f577409ebfd6465a3dfa317942d40c402792241f1f3eb11a8a9e5a35.scope: Deactivated successfully.
Dec 05 10:37:12 compute-0 podman[295423]: 2025-12-05 10:37:12.624770938 +0000 UTC m=+0.482029004 container died f5c2afe8f577409ebfd6465a3dfa317942d40c402792241f1f3eb11a8a9e5a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:37:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:37:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:37:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f5a6221877dd8578297e76024a3c57d397373f291c6e9b0170a191108d332d9-merged.mount: Deactivated successfully.
Dec 05 10:37:12 compute-0 podman[295423]: 2025-12-05 10:37:12.674699894 +0000 UTC m=+0.531957950 container remove f5c2afe8f577409ebfd6465a3dfa317942d40c402792241f1f3eb11a8a9e5a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chaum, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 05 10:37:12 compute-0 systemd[1]: libpod-conmon-f5c2afe8f577409ebfd6465a3dfa317942d40c402792241f1f3eb11a8a9e5a35.scope: Deactivated successfully.
Dec 05 10:37:12 compute-0 sudo[295320]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:12 compute-0 sudo[295462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:37:12 compute-0 sudo[295462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:12 compute-0 sudo[295462]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:12 compute-0 sudo[295487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:37:12 compute-0 sudo[295487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:37:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:37:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:37:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:37:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:13.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:13 compute-0 podman[295552]: 2025-12-05 10:37:13.334156209 +0000 UTC m=+0.043293038 container create e86a4f4221000e0187a703d3f371ae3d9e831e4044791035656e7bfed4268bb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_gates, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:37:13 compute-0 systemd[1]: Started libpod-conmon-e86a4f4221000e0187a703d3f371ae3d9e831e4044791035656e7bfed4268bb8.scope.
Dec 05 10:37:13 compute-0 ceph-mon[74418]: pgmap v1447: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:37:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:37:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:37:13 compute-0 podman[295552]: 2025-12-05 10:37:13.315189283 +0000 UTC m=+0.024326142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:37:13 compute-0 podman[295552]: 2025-12-05 10:37:13.42287034 +0000 UTC m=+0.132007189 container init e86a4f4221000e0187a703d3f371ae3d9e831e4044791035656e7bfed4268bb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 05 10:37:13 compute-0 podman[295552]: 2025-12-05 10:37:13.431067693 +0000 UTC m=+0.140204522 container start e86a4f4221000e0187a703d3f371ae3d9e831e4044791035656e7bfed4268bb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 05 10:37:13 compute-0 podman[295552]: 2025-12-05 10:37:13.434926008 +0000 UTC m=+0.144062837 container attach e86a4f4221000e0187a703d3f371ae3d9e831e4044791035656e7bfed4268bb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:37:13 compute-0 practical_gates[295568]: 167 167
Dec 05 10:37:13 compute-0 systemd[1]: libpod-e86a4f4221000e0187a703d3f371ae3d9e831e4044791035656e7bfed4268bb8.scope: Deactivated successfully.
Dec 05 10:37:13 compute-0 podman[295552]: 2025-12-05 10:37:13.440312145 +0000 UTC m=+0.149448984 container died e86a4f4221000e0187a703d3f371ae3d9e831e4044791035656e7bfed4268bb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_gates, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-48350a7113ae7ec133b73ce296daaedd7411973e83bc3a993b6b731e5d6e4ca7-merged.mount: Deactivated successfully.
Dec 05 10:37:13 compute-0 podman[295552]: 2025-12-05 10:37:13.485695568 +0000 UTC m=+0.194832397 container remove e86a4f4221000e0187a703d3f371ae3d9e831e4044791035656e7bfed4268bb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_gates, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 05 10:37:13 compute-0 systemd[1]: libpod-conmon-e86a4f4221000e0187a703d3f371ae3d9e831e4044791035656e7bfed4268bb8.scope: Deactivated successfully.
Dec 05 10:37:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1448: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:37:13 compute-0 podman[295591]: 2025-12-05 10:37:13.67745651 +0000 UTC m=+0.045195409 container create 681e939c6a4c0e1d0eea6a143c27046c0d50ab40052d8c4614eebd60b734d1d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:37:13 compute-0 systemd[1]: Started libpod-conmon-681e939c6a4c0e1d0eea6a143c27046c0d50ab40052d8c4614eebd60b734d1d7.scope.
Dec 05 10:37:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770c290e46b48fdc1b6ebd6a893e279b63216b69fd7e1a867a8995b154255339/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770c290e46b48fdc1b6ebd6a893e279b63216b69fd7e1a867a8995b154255339/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770c290e46b48fdc1b6ebd6a893e279b63216b69fd7e1a867a8995b154255339/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770c290e46b48fdc1b6ebd6a893e279b63216b69fd7e1a867a8995b154255339/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:37:13 compute-0 podman[295591]: 2025-12-05 10:37:13.658563677 +0000 UTC m=+0.026302606 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:37:13 compute-0 podman[295591]: 2025-12-05 10:37:13.764439294 +0000 UTC m=+0.132178203 container init 681e939c6a4c0e1d0eea6a143c27046c0d50ab40052d8c4614eebd60b734d1d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_meninsky, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:37:13 compute-0 podman[295591]: 2025-12-05 10:37:13.773679666 +0000 UTC m=+0.141418565 container start 681e939c6a4c0e1d0eea6a143c27046c0d50ab40052d8c4614eebd60b734d1d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 10:37:13 compute-0 podman[295591]: 2025-12-05 10:37:13.777685934 +0000 UTC m=+0.145424833 container attach 681e939c6a4c0e1d0eea6a143c27046c0d50ab40052d8c4614eebd60b734d1d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:37:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:13.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:13.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:13 compute-0 nova_compute[257087]: 2025-12-05 10:37:13.950 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:37:13 compute-0 nova_compute[257087]: 2025-12-05 10:37:13.951 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:13 compute-0 nova_compute[257087]: 2025-12-05 10:37:13.951 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:37:13 compute-0 nova_compute[257087]: 2025-12-05 10:37:13.951 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:37:13 compute-0 nova_compute[257087]: 2025-12-05 10:37:13.952 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:37:13 compute-0 nova_compute[257087]: 2025-12-05 10:37:13.953 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:37:14 compute-0 flamboyant_meninsky[295607]: {}
Dec 05 10:37:14 compute-0 lvm[295682]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:37:14 compute-0 lvm[295682]: VG ceph_vg0 finished
Dec 05 10:37:14 compute-0 systemd[1]: libpod-681e939c6a4c0e1d0eea6a143c27046c0d50ab40052d8c4614eebd60b734d1d7.scope: Deactivated successfully.
Dec 05 10:37:14 compute-0 systemd[1]: libpod-681e939c6a4c0e1d0eea6a143c27046c0d50ab40052d8c4614eebd60b734d1d7.scope: Consumed 1.221s CPU time.
Dec 05 10:37:14 compute-0 podman[295591]: 2025-12-05 10:37:14.537352833 +0000 UTC m=+0.905091752 container died 681e939c6a4c0e1d0eea6a143c27046c0d50ab40052d8c4614eebd60b734d1d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:37:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-770c290e46b48fdc1b6ebd6a893e279b63216b69fd7e1a867a8995b154255339-merged.mount: Deactivated successfully.
Dec 05 10:37:14 compute-0 podman[295591]: 2025-12-05 10:37:14.594959639 +0000 UTC m=+0.962698538 container remove 681e939c6a4c0e1d0eea6a143c27046c0d50ab40052d8c4614eebd60b734d1d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_meninsky, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 10:37:14 compute-0 systemd[1]: libpod-conmon-681e939c6a4c0e1d0eea6a143c27046c0d50ab40052d8c4614eebd60b734d1d7.scope: Deactivated successfully.
Dec 05 10:37:14 compute-0 sudo[295487]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:37:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:14 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:37:14 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:14 compute-0 sudo[295700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:37:14 compute-0 sudo[295700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:14 compute-0 sudo[295700]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:15.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:15 compute-0 ceph-mon[74418]: pgmap v1448: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:37:15 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:15 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:37:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:37:15] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:37:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:37:15] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:37:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1449: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:37:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:15.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:16 compute-0 ceph-mon[74418]: pgmap v1449: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:37:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:17.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:37:17 compute-0 nova_compute[257087]: 2025-12-05 10:37:17.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:37:17 compute-0 nova_compute[257087]: 2025-12-05 10:37:17.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:37:17 compute-0 nova_compute[257087]: 2025-12-05 10:37:17.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:37:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:17.582Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:17 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1450: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:37:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:17.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:37:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:37:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:37:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:37:18 compute-0 podman[295727]: 2025-12-05 10:37:18.446424576 +0000 UTC m=+0.104363378 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:37:18 compute-0 podman[295729]: 2025-12-05 10:37:18.454894656 +0000 UTC m=+0.112824378 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:37:18 compute-0 podman[295728]: 2025-12-05 10:37:18.485338564 +0000 UTC m=+0.139553524 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:37:18 compute-0 ceph-mon[74418]: pgmap v1450: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 05 10:37:18 compute-0 sudo[295792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:37:18 compute-0 sudo[295792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:18 compute-0 sudo[295792]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:18.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:37:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:18.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:18 compute-0 nova_compute[257087]: 2025-12-05 10:37:18.955 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:19.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:19 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1451: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:37:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:19.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:20 compute-0 nova_compute[257087]: 2025-12-05 10:37:20.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:37:20 compute-0 nova_compute[257087]: 2025-12-05 10:37:20.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:37:20 compute-0 nova_compute[257087]: 2025-12-05 10:37:20.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:37:20 compute-0 nova_compute[257087]: 2025-12-05 10:37:20.558 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:37:20 compute-0 nova_compute[257087]: 2025-12-05 10:37:20.558 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:37:20 compute-0 nova_compute[257087]: 2025-12-05 10:37:20.559 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:37:20 compute-0 nova_compute[257087]: 2025-12-05 10:37:20.559 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:37:20 compute-0 nova_compute[257087]: 2025-12-05 10:37:20.559 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:37:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:37:20.600 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:37:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:37:20.600 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:37:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:37:20.600 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:37:20 compute-0 ceph-mon[74418]: pgmap v1451: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 05 10:37:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:37:21 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3765363089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.094 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:37:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:21.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.284 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.285 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4456MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.285 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.286 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.351 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.352 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.376 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:37:21 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1452: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:37:21 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1583704583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.831 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.839 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.856 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.858 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:37:21 compute-0 nova_compute[257087]: 2025-12-05 10:37:21.858 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:37:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3765363089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:37:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1121363769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:37:21 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1583704583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:37:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:21.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:37:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:37:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:37:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:37:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:37:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:23.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:23 compute-0 ceph-mon[74418]: pgmap v1452: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:23 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3057347829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:37:23 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/962514327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:37:23 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1453: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:23.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:23.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:23 compute-0 nova_compute[257087]: 2025-12-05 10:37:23.956 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:24 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3807026240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:37:24 compute-0 ceph-mon[74418]: pgmap v1453: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:25.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:37:25] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:37:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:37:25] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:37:25 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1454: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:37:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:25.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:26 compute-0 nova_compute[257087]: 2025-12-05 10:37:26.858 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:37:26 compute-0 nova_compute[257087]: 2025-12-05 10:37:26.859 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:37:26 compute-0 nova_compute[257087]: 2025-12-05 10:37:26.859 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:37:26 compute-0 ceph-mon[74418]: pgmap v1454: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:37:26 compute-0 nova_compute[257087]: 2025-12-05 10:37:26.909 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:37:26 compute-0 nova_compute[257087]: 2025-12-05 10:37:26.910 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:37:26 compute-0 nova_compute[257087]: 2025-12-05 10:37:26.911 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:37:26 compute-0 nova_compute[257087]: 2025-12-05 10:37:26.911 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:37:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:27.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:37:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:27.584Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:37:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:37:27 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1455: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:37:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:37:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:37:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:37:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:37:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:37:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:37:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:37:27
Dec 05 10:37:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:37:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:37:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'backups', '.rgw.root', 'default.rgw.meta', '.nfs', 'volumes', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta']
Dec 05 10:37:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:37:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:27.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:37:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:37:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:37:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:37:28 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:37:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:37:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:28.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:28 compute-0 nova_compute[257087]: 2025-12-05 10:37:28.959 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:29.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:29 compute-0 ceph-mon[74418]: pgmap v1455: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 05 10:37:29 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1456: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 45 op/s
Dec 05 10:37:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:29.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:37:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:31.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:37:31 compute-0 ceph-mon[74418]: pgmap v1456: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 45 op/s
Dec 05 10:37:31 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1457: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Dec 05 10:37:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:31.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:37:32 compute-0 ceph-mon[74418]: pgmap v1457: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Dec 05 10:37:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:37:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:37:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:37:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:37:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:33.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:33 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1458: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 05 10:37:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:33.862Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:37:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:33.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:33.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:33 compute-0 nova_compute[257087]: 2025-12-05 10:37:33.961 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:33 compute-0 nova_compute[257087]: 2025-12-05 10:37:33.962 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:34 compute-0 ceph-mon[74418]: pgmap v1458: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 05 10:37:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:35.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:37:35] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:37:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:37:35] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec 05 10:37:35 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1459: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 0 B/s wr, 105 op/s
Dec 05 10:37:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:35.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:36 compute-0 ceph-mon[74418]: pgmap v1459: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 0 B/s wr, 105 op/s
Dec 05 10:37:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:37.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:37:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:37.585Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:37 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1460: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Dec 05 10:37:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:37:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:37.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:37:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:37:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:37:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:37:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:37:38 compute-0 ceph-mon[74418]: pgmap v1460: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Dec 05 10:37:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:38.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:38 compute-0 nova_compute[257087]: 2025-12-05 10:37:38.962 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:38 compute-0 sudo[295881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:37:38 compute-0 sudo[295881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:38 compute-0 sudo[295881]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:39.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:39 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1461: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 0 B/s wr, 105 op/s
Dec 05 10:37:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:40 compute-0 ceph-mon[74418]: pgmap v1461: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 0 B/s wr, 105 op/s
Dec 05 10:37:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:41.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:41 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1462: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Dec 05 10:37:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:41.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:37:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:37:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:37:42 compute-0 ceph-mon[74418]: pgmap v1462: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Dec 05 10:37:42 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:37:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:37:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:37:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:37:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:37:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:43.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:43 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1463: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Dec 05 10:37:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:43.863Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 05 10:37:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:43.863Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:37:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:43.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:37:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:43.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:43 compute-0 nova_compute[257087]: 2025-12-05 10:37:43.965 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:45 compute-0 ceph-mon[74418]: pgmap v1463: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Dec 05 10:37:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:45.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:37:45] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:37:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:37:45] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:37:45 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1464: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Dec 05 10:37:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:45.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:47 compute-0 ceph-mon[74418]: pgmap v1464: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Dec 05 10:37:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:47.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:37:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:47.585Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:47 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1465: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:37:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:47.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:37:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:37:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:37:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:37:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:48.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:48 compute-0 nova_compute[257087]: 2025-12-05 10:37:48.967 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:37:48 compute-0 nova_compute[257087]: 2025-12-05 10:37:48.968 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:48 compute-0 nova_compute[257087]: 2025-12-05 10:37:48.968 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:37:48 compute-0 nova_compute[257087]: 2025-12-05 10:37:48.968 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:37:48 compute-0 nova_compute[257087]: 2025-12-05 10:37:48.969 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:37:48 compute-0 nova_compute[257087]: 2025-12-05 10:37:48.971 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:37:49 compute-0 ceph-mon[74418]: pgmap v1465: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 05 10:37:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:49.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:49 compute-0 podman[295916]: 2025-12-05 10:37:49.419867521 +0000 UTC m=+0.077135387 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 10:37:49 compute-0 podman[295918]: 2025-12-05 10:37:49.429012199 +0000 UTC m=+0.087320904 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:37:49 compute-0 podman[295917]: 2025-12-05 10:37:49.463037875 +0000 UTC m=+0.122806189 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:37:49 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1466: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:37:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:49.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:51 compute-0 ceph-mon[74418]: pgmap v1466: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 05 10:37:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:51.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:51 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1467: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:51.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:37:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:37:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:37:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:37:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:37:53 compute-0 ceph-mon[74418]: pgmap v1467: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:53.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:53 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1468: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:53.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:53.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:53 compute-0 nova_compute[257087]: 2025-12-05 10:37:53.971 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:37:53 compute-0 nova_compute[257087]: 2025-12-05 10:37:53.973 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:37:53 compute-0 nova_compute[257087]: 2025-12-05 10:37:53.973 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:37:53 compute-0 nova_compute[257087]: 2025-12-05 10:37:53.974 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:37:53 compute-0 nova_compute[257087]: 2025-12-05 10:37:53.994 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:53 compute-0 nova_compute[257087]: 2025-12-05 10:37:53.995 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:37:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:55.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:55 compute-0 ceph-mon[74418]: pgmap v1468: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:37:55] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:37:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:37:55] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec 05 10:37:55 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1469: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:37:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:55.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:37:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3023187864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:37:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:37:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3023187864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:37:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:57.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:57 compute-0 ceph-mon[74418]: pgmap v1469: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:37:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3023187864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:37:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/3023187864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:37:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:37:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:57.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:37:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:37:57 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1470: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:37:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:37:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:37:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:37:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:37:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:37:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:57.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:37:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:37:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:37:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:37:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:37:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:37:58 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:37:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:37:58.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:37:58 compute-0 nova_compute[257087]: 2025-12-05 10:37:58.995 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:58 compute-0 nova_compute[257087]: 2025-12-05 10:37:58.996 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:37:59 compute-0 sudo[295991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:37:59 compute-0 sudo[295991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:37:59 compute-0 sudo[295991]: pam_unix(sudo:session): session closed for user root
Dec 05 10:37:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:37:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:37:59.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:37:59 compute-0 ceph-mon[74418]: pgmap v1470: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:37:59 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1471: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:37:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:37:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:37:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:37:59.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:00 compute-0 nova_compute[257087]: 2025-12-05 10:38:00.788 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:01.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:01 compute-0 ceph-mon[74418]: pgmap v1471: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:01 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1472: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:01.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:38:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:38:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:38:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:38:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:38:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:03.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:03 compute-0 ceph-mon[74418]: pgmap v1472: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:03 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1473: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:03.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:03.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:03 compute-0 nova_compute[257087]: 2025-12-05 10:38:03.998 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:38:04 compute-0 nova_compute[257087]: 2025-12-05 10:38:03.999 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:04 compute-0 nova_compute[257087]: 2025-12-05 10:38:03.999 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:38:04 compute-0 nova_compute[257087]: 2025-12-05 10:38:04.000 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:38:04 compute-0 nova_compute[257087]: 2025-12-05 10:38:04.000 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:38:04 compute-0 ceph-mon[74418]: pgmap v1473: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:05.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:05 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:38:05] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:38:05 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:38:05] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:38:05 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1474: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:05 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:05 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:05 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:05.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:06 compute-0 ceph-mon[74418]: pgmap v1474: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:07.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:07 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:38:07 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:07.588Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:07 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1475: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:07 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:07 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:38:07 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:38:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:38:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:38:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:07 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:38:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:08 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:38:08 compute-0 ceph-mon[74418]: pgmap v1475: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:08 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:08.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:09 compute-0 nova_compute[257087]: 2025-12-05 10:38:08.999 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:09.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:09 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1476: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:09 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:09 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:09 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:09.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:10 compute-0 ceph-mon[74418]: pgmap v1476: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:11.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:11 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1477: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:11 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:11 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:11 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:11.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:38:12 compute-0 nova_compute[257087]: 2025-12-05 10:38:12.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:12 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:38:12 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:38:13 compute-0 ceph-mon[74418]: pgmap v1477: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:13 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:38:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:38:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:38:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:12 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:38:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:13 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:38:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:13.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:13 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1478: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:13 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:13.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:13 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:13 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:38:13 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:13.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:38:14 compute-0 nova_compute[257087]: 2025-12-05 10:38:14.002 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:38:14 compute-0 nova_compute[257087]: 2025-12-05 10:38:14.004 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:38:14 compute-0 nova_compute[257087]: 2025-12-05 10:38:14.005 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:38:14 compute-0 nova_compute[257087]: 2025-12-05 10:38:14.005 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:38:14 compute-0 nova_compute[257087]: 2025-12-05 10:38:14.049 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:14 compute-0 nova_compute[257087]: 2025-12-05 10:38:14.050 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:38:15 compute-0 sudo[296033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:38:15 compute-0 sudo[296033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:38:15 compute-0 ceph-mon[74418]: pgmap v1478: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:15 compute-0 sudo[296033]: pam_unix(sudo:session): session closed for user root
Dec 05 10:38:15 compute-0 sudo[296058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 05 10:38:15 compute-0 sudo[296058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:38:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:15.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 05 10:38:15 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:38:15 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 05 10:38:15 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:38:15 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:38:15] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:38:15 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:38:15] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:38:15 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1479: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:15 compute-0 sudo[296058]: pam_unix(sudo:session): session closed for user root
Dec 05 10:38:15 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:15 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:15 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:15.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:38:16 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:38:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:38:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:38:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1480: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:38:16 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1481: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec 05 10:38:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 10:38:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:38:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 05 10:38:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:38:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:38:16 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:38:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 05 10:38:16 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:38:16 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:38:16 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:38:16 compute-0 sudo[296115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:38:16 compute-0 sudo[296115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:38:16 compute-0 sudo[296115]: pam_unix(sudo:session): session closed for user root
Dec 05 10:38:16 compute-0 sudo[296140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 05 10:38:16 compute-0 sudo[296140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:38:16 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:38:16 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:38:16 compute-0 ceph-mon[74418]: pgmap v1479: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:16 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:38:16 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 10:38:16 compute-0 ceph-mon[74418]: pgmap v1480: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 05 10:38:16 compute-0 ceph-mon[74418]: pgmap v1481: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec 05 10:38:16 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:38:16 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:38:16 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 10:38:16 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 10:38:16 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:38:16 compute-0 podman[296207]: 2025-12-05 10:38:16.832065329 +0000 UTC m=+0.063349073 container create b04877ec17ade707fccdd8223106fbfbfc0ae9a2bdfb50cb42c62a4333bc9d24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_dijkstra, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 10:38:16 compute-0 systemd[1]: Started libpod-conmon-b04877ec17ade707fccdd8223106fbfbfc0ae9a2bdfb50cb42c62a4333bc9d24.scope.
Dec 05 10:38:16 compute-0 podman[296207]: 2025-12-05 10:38:16.799015961 +0000 UTC m=+0.030299785 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:38:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:38:16 compute-0 podman[296207]: 2025-12-05 10:38:16.955768502 +0000 UTC m=+0.187052296 container init b04877ec17ade707fccdd8223106fbfbfc0ae9a2bdfb50cb42c62a4333bc9d24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_dijkstra, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 05 10:38:16 compute-0 podman[296207]: 2025-12-05 10:38:16.969126404 +0000 UTC m=+0.200410178 container start b04877ec17ade707fccdd8223106fbfbfc0ae9a2bdfb50cb42c62a4333bc9d24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 05 10:38:16 compute-0 podman[296207]: 2025-12-05 10:38:16.974361357 +0000 UTC m=+0.205645121 container attach b04877ec17ade707fccdd8223106fbfbfc0ae9a2bdfb50cb42c62a4333bc9d24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 05 10:38:16 compute-0 frosty_dijkstra[296223]: 167 167
Dec 05 10:38:16 compute-0 systemd[1]: libpod-b04877ec17ade707fccdd8223106fbfbfc0ae9a2bdfb50cb42c62a4333bc9d24.scope: Deactivated successfully.
Dec 05 10:38:16 compute-0 podman[296207]: 2025-12-05 10:38:16.978028156 +0000 UTC m=+0.209311920 container died b04877ec17ade707fccdd8223106fbfbfc0ae9a2bdfb50cb42c62a4333bc9d24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_dijkstra, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e9e2a4d008eab0989704645224c599195d50ab18f59629850f6f53c4b675762-merged.mount: Deactivated successfully.
Dec 05 10:38:17 compute-0 podman[296207]: 2025-12-05 10:38:17.026056052 +0000 UTC m=+0.257339786 container remove b04877ec17ade707fccdd8223106fbfbfc0ae9a2bdfb50cb42c62a4333bc9d24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 10:38:17 compute-0 systemd[1]: libpod-conmon-b04877ec17ade707fccdd8223106fbfbfc0ae9a2bdfb50cb42c62a4333bc9d24.scope: Deactivated successfully.
Dec 05 10:38:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:17.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:17 compute-0 podman[296246]: 2025-12-05 10:38:17.273611841 +0000 UTC m=+0.054907244 container create e523eb6df2676372f932e751435b30a4fe86fffca83e901af8d3d6e60b350934 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lichterman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:38:17 compute-0 systemd[1]: Started libpod-conmon-e523eb6df2676372f932e751435b30a4fe86fffca83e901af8d3d6e60b350934.scope.
Dec 05 10:38:17 compute-0 podman[296246]: 2025-12-05 10:38:17.251755726 +0000 UTC m=+0.033051149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:38:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9531770b82de7ac84174d1a22515846290ccf6d8945de2ea9e7a27081427a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9531770b82de7ac84174d1a22515846290ccf6d8945de2ea9e7a27081427a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9531770b82de7ac84174d1a22515846290ccf6d8945de2ea9e7a27081427a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9531770b82de7ac84174d1a22515846290ccf6d8945de2ea9e7a27081427a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9531770b82de7ac84174d1a22515846290ccf6d8945de2ea9e7a27081427a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:17 compute-0 podman[296246]: 2025-12-05 10:38:17.371706467 +0000 UTC m=+0.153001930 container init e523eb6df2676372f932e751435b30a4fe86fffca83e901af8d3d6e60b350934 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lichterman, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:38:17 compute-0 podman[296246]: 2025-12-05 10:38:17.379459958 +0000 UTC m=+0.160755371 container start e523eb6df2676372f932e751435b30a4fe86fffca83e901af8d3d6e60b350934 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 10:38:17 compute-0 podman[296246]: 2025-12-05 10:38:17.383263751 +0000 UTC m=+0.164559164 container attach e523eb6df2676372f932e751435b30a4fe86fffca83e901af8d3d6e60b350934 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 10:38:17 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:38:17 compute-0 nova_compute[257087]: 2025-12-05 10:38:17.549 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:17 compute-0 nova_compute[257087]: 2025-12-05 10:38:17.555 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:17 compute-0 nova_compute[257087]: 2025-12-05 10:38:17.555 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 10:38:17 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:17.590Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:17 compute-0 frosty_lichterman[296262]: --> passed data devices: 0 physical, 1 LVM
Dec 05 10:38:17 compute-0 frosty_lichterman[296262]: --> All data devices are unavailable
Dec 05 10:38:17 compute-0 systemd[1]: libpod-e523eb6df2676372f932e751435b30a4fe86fffca83e901af8d3d6e60b350934.scope: Deactivated successfully.
Dec 05 10:38:17 compute-0 conmon[296262]: conmon e523eb6df2676372f932 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e523eb6df2676372f932e751435b30a4fe86fffca83e901af8d3d6e60b350934.scope/container/memory.events
Dec 05 10:38:17 compute-0 podman[296246]: 2025-12-05 10:38:17.774352491 +0000 UTC m=+0.555647894 container died e523eb6df2676372f932e751435b30a4fe86fffca83e901af8d3d6e60b350934 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lichterman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e9531770b82de7ac84174d1a22515846290ccf6d8945de2ea9e7a27081427a3-merged.mount: Deactivated successfully.
Dec 05 10:38:17 compute-0 podman[296246]: 2025-12-05 10:38:17.824726861 +0000 UTC m=+0.606022274 container remove e523eb6df2676372f932e751435b30a4fe86fffca83e901af8d3d6e60b350934 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lichterman, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:38:17 compute-0 systemd[1]: libpod-conmon-e523eb6df2676372f932e751435b30a4fe86fffca83e901af8d3d6e60b350934.scope: Deactivated successfully.
Dec 05 10:38:17 compute-0 sudo[296140]: pam_unix(sudo:session): session closed for user root
Dec 05 10:38:17 compute-0 sudo[296287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:38:17 compute-0 sudo[296287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:38:17 compute-0 sudo[296287]: pam_unix(sudo:session): session closed for user root
Dec 05 10:38:17 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:17 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:17 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:17 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:38:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:38:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:38:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:18 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:38:18 compute-0 sudo[296312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- lvm list --format json
Dec 05 10:38:18 compute-0 sudo[296312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:38:18 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1482: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 725 B/s rd, 0 op/s
Dec 05 10:38:18 compute-0 podman[296379]: 2025-12-05 10:38:18.475496539 +0000 UTC m=+0.053117515 container create eaf3555e09e0671953fb0300140be911e8e6fe1723fa798573d854d30df78c9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:38:18 compute-0 systemd[1]: Started libpod-conmon-eaf3555e09e0671953fb0300140be911e8e6fe1723fa798573d854d30df78c9f.scope.
Dec 05 10:38:18 compute-0 podman[296379]: 2025-12-05 10:38:18.44832645 +0000 UTC m=+0.025947506 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:38:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:38:18 compute-0 podman[296379]: 2025-12-05 10:38:18.572337761 +0000 UTC m=+0.149958797 container init eaf3555e09e0671953fb0300140be911e8e6fe1723fa798573d854d30df78c9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 05 10:38:18 compute-0 podman[296379]: 2025-12-05 10:38:18.579913907 +0000 UTC m=+0.157534883 container start eaf3555e09e0671953fb0300140be911e8e6fe1723fa798573d854d30df78c9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:38:18 compute-0 podman[296379]: 2025-12-05 10:38:18.584290376 +0000 UTC m=+0.161911372 container attach eaf3555e09e0671953fb0300140be911e8e6fe1723fa798573d854d30df78c9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 10:38:18 compute-0 distracted_torvalds[296396]: 167 167
Dec 05 10:38:18 compute-0 systemd[1]: libpod-eaf3555e09e0671953fb0300140be911e8e6fe1723fa798573d854d30df78c9f.scope: Deactivated successfully.
Dec 05 10:38:18 compute-0 podman[296379]: 2025-12-05 10:38:18.586596209 +0000 UTC m=+0.164217175 container died eaf3555e09e0671953fb0300140be911e8e6fe1723fa798573d854d30df78c9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Dec 05 10:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-497e1bbebbe4d1f9fb5366006f361f0fa8baf4f81e690aacd20b786411e92158-merged.mount: Deactivated successfully.
Dec 05 10:38:18 compute-0 podman[296379]: 2025-12-05 10:38:18.637278466 +0000 UTC m=+0.214899432 container remove eaf3555e09e0671953fb0300140be911e8e6fe1723fa798573d854d30df78c9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 05 10:38:18 compute-0 systemd[1]: libpod-conmon-eaf3555e09e0671953fb0300140be911e8e6fe1723fa798573d854d30df78c9f.scope: Deactivated successfully.
Dec 05 10:38:18 compute-0 nova_compute[257087]: 2025-12-05 10:38:18.732 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:18 compute-0 podman[296423]: 2025-12-05 10:38:18.851317114 +0000 UTC m=+0.064827733 container create afe2e5d4d0158acee300b0ed052ddecc4d8a84514e7f1d22bfe9784d03bb4326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_burnell, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 05 10:38:18 compute-0 systemd[1]: Started libpod-conmon-afe2e5d4d0158acee300b0ed052ddecc4d8a84514e7f1d22bfe9784d03bb4326.scope.
Dec 05 10:38:18 compute-0 podman[296423]: 2025-12-05 10:38:18.827327952 +0000 UTC m=+0.040838571 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:38:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54b7470af54f3bd75b8492d537a610886b1e63b37083b5117035db7fa17a407/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54b7470af54f3bd75b8492d537a610886b1e63b37083b5117035db7fa17a407/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54b7470af54f3bd75b8492d537a610886b1e63b37083b5117035db7fa17a407/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54b7470af54f3bd75b8492d537a610886b1e63b37083b5117035db7fa17a407/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:18 compute-0 podman[296423]: 2025-12-05 10:38:18.955451144 +0000 UTC m=+0.168961773 container init afe2e5d4d0158acee300b0ed052ddecc4d8a84514e7f1d22bfe9784d03bb4326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 10:38:18 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:18.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:18 compute-0 podman[296423]: 2025-12-05 10:38:18.969339182 +0000 UTC m=+0.182849781 container start afe2e5d4d0158acee300b0ed052ddecc4d8a84514e7f1d22bfe9784d03bb4326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_burnell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:38:18 compute-0 podman[296423]: 2025-12-05 10:38:18.974656366 +0000 UTC m=+0.188167015 container attach afe2e5d4d0158acee300b0ed052ddecc4d8a84514e7f1d22bfe9784d03bb4326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 05 10:38:19 compute-0 nova_compute[257087]: 2025-12-05 10:38:19.049 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:19 compute-0 nova_compute[257087]: 2025-12-05 10:38:19.051 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:38:19 compute-0 sudo[296444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:38:19 compute-0 sudo[296444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:38:19 compute-0 sudo[296444]: pam_unix(sudo:session): session closed for user root
Dec 05 10:38:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:19.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]: {
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:     "1": [
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:         {
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             "devices": [
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "/dev/loop3"
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             ],
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             "lv_name": "ceph_lv0",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             "lv_size": "21470642176",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3c63ce0f-5206-59ae-8381-b67d0b6424b5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f2cb7ff3-5059-40ee-ae0a-c37b437655e2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             "lv_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             "name": "ceph_lv0",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             "tags": {
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.block_uuid": "FHddET-gOrG-5fxT-teaY-fjAD-Re3j-hkbOcE",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.cluster_fsid": "3c63ce0f-5206-59ae-8381-b67d0b6424b5",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.cluster_name": "ceph",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.crush_device_class": "",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.encrypted": "0",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.osd_fsid": "f2cb7ff3-5059-40ee-ae0a-c37b437655e2",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.osd_id": "1",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.type": "block",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.vdo": "0",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:                 "ceph.with_tpm": "0"
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             },
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             "type": "block",
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:             "vg_name": "ceph_vg0"
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:         }
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]:     ]
Dec 05 10:38:19 compute-0 nostalgic_burnell[296439]: }
Dec 05 10:38:19 compute-0 ceph-mon[74418]: pgmap v1482: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 725 B/s rd, 0 op/s
Dec 05 10:38:19 compute-0 systemd[1]: libpod-afe2e5d4d0158acee300b0ed052ddecc4d8a84514e7f1d22bfe9784d03bb4326.scope: Deactivated successfully.
Dec 05 10:38:19 compute-0 podman[296423]: 2025-12-05 10:38:19.316823146 +0000 UTC m=+0.530333755 container died afe2e5d4d0158acee300b0ed052ddecc4d8a84514e7f1d22bfe9784d03bb4326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_burnell, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 10:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b54b7470af54f3bd75b8492d537a610886b1e63b37083b5117035db7fa17a407-merged.mount: Deactivated successfully.
Dec 05 10:38:19 compute-0 podman[296423]: 2025-12-05 10:38:19.372400547 +0000 UTC m=+0.585911146 container remove afe2e5d4d0158acee300b0ed052ddecc4d8a84514e7f1d22bfe9784d03bb4326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_burnell, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 10:38:19 compute-0 systemd[1]: libpod-conmon-afe2e5d4d0158acee300b0ed052ddecc4d8a84514e7f1d22bfe9784d03bb4326.scope: Deactivated successfully.
Dec 05 10:38:19 compute-0 sudo[296312]: pam_unix(sudo:session): session closed for user root
Dec 05 10:38:19 compute-0 sudo[296485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:38:19 compute-0 sudo[296485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:38:19 compute-0 sudo[296485]: pam_unix(sudo:session): session closed for user root
Dec 05 10:38:19 compute-0 nova_compute[257087]: 2025-12-05 10:38:19.542 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:19 compute-0 nova_compute[257087]: 2025-12-05 10:38:19.543 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:19 compute-0 sudo[296528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3c63ce0f-5206-59ae-8381-b67d0b6424b5/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 3c63ce0f-5206-59ae-8381-b67d0b6424b5 -- raw list --format json
Dec 05 10:38:19 compute-0 sudo[296528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:38:19 compute-0 podman[296511]: 2025-12-05 10:38:19.613265524 +0000 UTC m=+0.071124074 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 10:38:19 compute-0 podman[296509]: 2025-12-05 10:38:19.633789522 +0000 UTC m=+0.096433272 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:38:19 compute-0 podman[296510]: 2025-12-05 10:38:19.681668234 +0000 UTC m=+0.141687223 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:38:19 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:19 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:19 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:19.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:20 compute-0 podman[296639]: 2025-12-05 10:38:20.066672698 +0000 UTC m=+0.060795623 container create c31017fd903a1fcf6d8c79ee6b59da22c310443137121ab67e6687b293ae24eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 05 10:38:20 compute-0 systemd[1]: Started libpod-conmon-c31017fd903a1fcf6d8c79ee6b59da22c310443137121ab67e6687b293ae24eb.scope.
Dec 05 10:38:20 compute-0 podman[296639]: 2025-12-05 10:38:20.042143321 +0000 UTC m=+0.036266246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:38:20 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1483: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 13 op/s
Dec 05 10:38:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:38:20 compute-0 podman[296639]: 2025-12-05 10:38:20.199260362 +0000 UTC m=+0.193383327 container init c31017fd903a1fcf6d8c79ee6b59da22c310443137121ab67e6687b293ae24eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 10:38:20 compute-0 podman[296639]: 2025-12-05 10:38:20.208987437 +0000 UTC m=+0.203110342 container start c31017fd903a1fcf6d8c79ee6b59da22c310443137121ab67e6687b293ae24eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Dec 05 10:38:20 compute-0 podman[296639]: 2025-12-05 10:38:20.213857149 +0000 UTC m=+0.207980074 container attach c31017fd903a1fcf6d8c79ee6b59da22c310443137121ab67e6687b293ae24eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_hawking, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 10:38:20 compute-0 nostalgic_hawking[296655]: 167 167
Dec 05 10:38:20 compute-0 systemd[1]: libpod-c31017fd903a1fcf6d8c79ee6b59da22c310443137121ab67e6687b293ae24eb.scope: Deactivated successfully.
Dec 05 10:38:20 compute-0 podman[296639]: 2025-12-05 10:38:20.218343381 +0000 UTC m=+0.212466306 container died c31017fd903a1fcf6d8c79ee6b59da22c310443137121ab67e6687b293ae24eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 10:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0f8fd1f5adbb242948e82f9a483ce51fbf800596faf5632475ccf5557b16789-merged.mount: Deactivated successfully.
Dec 05 10:38:20 compute-0 podman[296639]: 2025-12-05 10:38:20.268142935 +0000 UTC m=+0.262265830 container remove c31017fd903a1fcf6d8c79ee6b59da22c310443137121ab67e6687b293ae24eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 05 10:38:20 compute-0 systemd[1]: libpod-conmon-c31017fd903a1fcf6d8c79ee6b59da22c310443137121ab67e6687b293ae24eb.scope: Deactivated successfully.
Dec 05 10:38:20 compute-0 podman[296680]: 2025-12-05 10:38:20.470544536 +0000 UTC m=+0.052263721 container create fae4763133e5ffc3e0a635e9fd4b7bfc08369eaaec8cbb565697864f690d27bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 05 10:38:20 compute-0 systemd[1]: Started libpod-conmon-fae4763133e5ffc3e0a635e9fd4b7bfc08369eaaec8cbb565697864f690d27bb.scope.
Dec 05 10:38:20 compute-0 nova_compute[257087]: 2025-12-05 10:38:20.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:20 compute-0 podman[296680]: 2025-12-05 10:38:20.4486156 +0000 UTC m=+0.030334805 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 05 10:38:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 10:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a46005b84bf1d1757f8f9277869d466b8185b2f0a915ffdff72552132629233/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a46005b84bf1d1757f8f9277869d466b8185b2f0a915ffdff72552132629233/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a46005b84bf1d1757f8f9277869d466b8185b2f0a915ffdff72552132629233/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a46005b84bf1d1757f8f9277869d466b8185b2f0a915ffdff72552132629233/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:38:20 compute-0 podman[296680]: 2025-12-05 10:38:20.564082109 +0000 UTC m=+0.145801314 container init fae4763133e5ffc3e0a635e9fd4b7bfc08369eaaec8cbb565697864f690d27bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 10:38:20 compute-0 podman[296680]: 2025-12-05 10:38:20.57334053 +0000 UTC m=+0.155059755 container start fae4763133e5ffc3e0a635e9fd4b7bfc08369eaaec8cbb565697864f690d27bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:38:20 compute-0 podman[296680]: 2025-12-05 10:38:20.579500337 +0000 UTC m=+0.161219542 container attach fae4763133e5ffc3e0a635e9fd4b7bfc08369eaaec8cbb565697864f690d27bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 10:38:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:38:20.601 165250 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:38:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:38:20.602 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:38:20 compute-0 ovn_metadata_agent[165238]: 2025-12-05 10:38:20.602 165250 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:38:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:21.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:21 compute-0 ceph-mon[74418]: pgmap v1483: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 13 op/s
Dec 05 10:38:21 compute-0 lvm[296773]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:38:21 compute-0 lvm[296773]: VG ceph_vg0 finished
Dec 05 10:38:21 compute-0 intelligent_hellman[296698]: {}
Dec 05 10:38:21 compute-0 systemd[1]: libpod-fae4763133e5ffc3e0a635e9fd4b7bfc08369eaaec8cbb565697864f690d27bb.scope: Deactivated successfully.
Dec 05 10:38:21 compute-0 systemd[1]: libpod-fae4763133e5ffc3e0a635e9fd4b7bfc08369eaaec8cbb565697864f690d27bb.scope: Consumed 1.403s CPU time.
Dec 05 10:38:21 compute-0 podman[296680]: 2025-12-05 10:38:21.42392318 +0000 UTC m=+1.005642395 container died fae4763133e5ffc3e0a635e9fd4b7bfc08369eaaec8cbb565697864f690d27bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 10:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a46005b84bf1d1757f8f9277869d466b8185b2f0a915ffdff72552132629233-merged.mount: Deactivated successfully.
Dec 05 10:38:21 compute-0 podman[296680]: 2025-12-05 10:38:21.483006226 +0000 UTC m=+1.064725411 container remove fae4763133e5ffc3e0a635e9fd4b7bfc08369eaaec8cbb565697864f690d27bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 10:38:21 compute-0 systemd[1]: libpod-conmon-fae4763133e5ffc3e0a635e9fd4b7bfc08369eaaec8cbb565697864f690d27bb.scope: Deactivated successfully.
Dec 05 10:38:21 compute-0 sudo[296528]: pam_unix(sudo:session): session closed for user root
Dec 05 10:38:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 05 10:38:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:38:21 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 05 10:38:21 compute-0 ceph-mon[74418]: log_channel(audit) log [INF] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:38:21 compute-0 sudo[296789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:38:21 compute-0 sudo[296789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:38:21 compute-0 sudo[296789]: pam_unix(sudo:session): session closed for user root
Dec 05 10:38:21 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:21 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:21 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:21.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:22 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1484: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 13 op/s
Dec 05 10:38:22 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:38:22 compute-0 nova_compute[257087]: 2025-12-05 10:38:22.528 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:22 compute-0 nova_compute[257087]: 2025-12-05 10:38:22.529 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:22 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:38:22 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' 
Dec 05 10:38:22 compute-0 ceph-mon[74418]: pgmap v1484: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 13 op/s
Dec 05 10:38:22 compute-0 nova_compute[257087]: 2025-12-05 10:38:22.559 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:38:22 compute-0 nova_compute[257087]: 2025-12-05 10:38:22.560 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:38:22 compute-0 nova_compute[257087]: 2025-12-05 10:38:22.560 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:38:22 compute-0 nova_compute[257087]: 2025-12-05 10:38:22.560 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:38:22 compute-0 nova_compute[257087]: 2025-12-05 10:38:22.561 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:38:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:38:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:38:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:22 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:38:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:23 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:38:23 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:38:23 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2597360598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.089 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:38:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:23.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.290 257094 WARNING nova.virt.libvirt.driver [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.292 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4416MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.292 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.292 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:38:23 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2597360598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:38:23 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3228342909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:38:23 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3674149317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.646 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.646 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.760 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing inventories for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.792 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Updating ProviderTree inventory for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.793 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Updating inventory in ProviderTree for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.869 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing aggregate associations for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 10:38:23 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:23.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.898 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Refreshing trait associations for resource provider bad8518e-442e-4fc2-b7f3-2c453f1840d6, traits: HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_MMX,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 10:38:23 compute-0 nova_compute[257087]: 2025-12-05 10:38:23.914 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:38:23 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:23 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:38:23 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:23.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:38:24 compute-0 nova_compute[257087]: 2025-12-05 10:38:24.052 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:24 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1485: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Dec 05 10:38:24 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:38:24 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827845722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:38:24 compute-0 nova_compute[257087]: 2025-12-05 10:38:24.391 257094 DEBUG oslo_concurrency.processutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:38:24 compute-0 nova_compute[257087]: 2025-12-05 10:38:24.396 257094 DEBUG nova.compute.provider_tree [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed in ProviderTree for provider: bad8518e-442e-4fc2-b7f3-2c453f1840d6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:38:24 compute-0 nova_compute[257087]: 2025-12-05 10:38:24.414 257094 DEBUG nova.scheduler.client.report [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Inventory has not changed for provider bad8518e-442e-4fc2-b7f3-2c453f1840d6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:38:24 compute-0 nova_compute[257087]: 2025-12-05 10:38:24.416 257094 DEBUG nova.compute.resource_tracker [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:38:24 compute-0 nova_compute[257087]: 2025-12-05 10:38:24.417 257094 DEBUG oslo_concurrency.lockutils [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:38:24 compute-0 ceph-mon[74418]: pgmap v1485: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Dec 05 10:38:24 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2712659673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:38:24 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1908569916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:38:24 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1827845722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 10:38:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:25.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:25 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:38:25] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:38:25 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:38:25] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec 05 10:38:25 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:25 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:25 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:25.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:26 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1486: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 05 10:38:26 compute-0 nova_compute[257087]: 2025-12-05 10:38:26.530 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:26 compute-0 nova_compute[257087]: 2025-12-05 10:38:26.530 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:38:26 compute-0 nova_compute[257087]: 2025-12-05 10:38:26.530 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:38:26 compute-0 ceph-mon[74418]: pgmap v1486: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 05 10:38:26 compute-0 nova_compute[257087]: 2025-12-05 10:38:26.553 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:38:26 compute-0 nova_compute[257087]: 2025-12-05 10:38:26.553 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:26 compute-0 nova_compute[257087]: 2025-12-05 10:38:26.553 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:38:26 compute-0 nova_compute[257087]: 2025-12-05 10:38:26.554 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:26 compute-0 nova_compute[257087]: 2025-12-05 10:38:26.554 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 10:38:26 compute-0 nova_compute[257087]: 2025-12-05 10:38:26.585 257094 DEBUG nova.compute.manager [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 10:38:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:38:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:27.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:38:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:38:27 compute-0 nova_compute[257087]: 2025-12-05 10:38:27.561 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:27.592Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:38:27 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:27.592Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:27 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:38:27 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:38:27 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:38:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:38:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:38:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:38:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:38:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:38:27 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:38:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Optimize plan auto_2025-12-05_10:38:27
Dec 05 10:38:27 compute-0 ceph-mgr[74711]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 10:38:27 compute-0 ceph-mgr[74711]: [balancer INFO root] do_upmap
Dec 05 10:38:27 compute-0 ceph-mgr[74711]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.meta', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'vms', '.nfs', 'default.rgw.control']
Dec 05 10:38:27 compute-0 ceph-mgr[74711]: [balancer INFO root] prepared 0/10 upmap changes
Dec 05 10:38:27 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:27 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:27 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:27.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:27 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:38:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:38:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:38:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:28 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1487: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 10:38:28 compute-0 ceph-mgr[74711]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 10:38:28 compute-0 nova_compute[257087]: 2025-12-05 10:38:28.524 257094 DEBUG oslo_service.periodic_task [None req-5a22f888-c471-4e6d-8139-d43cda088dc9 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:38:28 compute-0 ceph-mon[74418]: pgmap v1487: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Dec 05 10:38:28 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:28.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:29 compute-0 nova_compute[257087]: 2025-12-05 10:38:29.055 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:38:29 compute-0 nova_compute[257087]: 2025-12-05 10:38:29.057 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:38:29 compute-0 nova_compute[257087]: 2025-12-05 10:38:29.057 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:38:29 compute-0 nova_compute[257087]: 2025-12-05 10:38:29.057 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:38:29 compute-0 nova_compute[257087]: 2025-12-05 10:38:29.087 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:29 compute-0 nova_compute[257087]: 2025-12-05 10:38:29.088 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:38:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:29.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:29 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:29 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:29 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:29.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:30 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1488: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Dec 05 10:38:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:31.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:31 compute-0 ceph-mon[74418]: pgmap v1488: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Dec 05 10:38:31 compute-0 sshd-session[296868]: Accepted publickey for zuul from 192.168.122.10 port 55670 ssh2: ECDSA SHA256:guZXuNt6GkCNOymnzmP0DvpAmsNBC8dM16hc2phnb8c
Dec 05 10:38:31 compute-0 systemd-logind[789]: New session 59 of user zuul.
Dec 05 10:38:31 compute-0 systemd[1]: Started Session 59 of User zuul.
Dec 05 10:38:31 compute-0 sshd-session[296868]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 10:38:31 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:31 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:31 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:31.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:32 compute-0 sudo[296872]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 05 10:38:32 compute-0 sudo[296872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 10:38:32 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1489: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Dec 05 10:38:32 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:38:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:32 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:38:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:38:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:38:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:33 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:38:33 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:33 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:33 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:33.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:33 compute-0 ceph-mon[74418]: pgmap v1489: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Dec 05 10:38:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:33.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:38:33 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:33.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:38:34 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:34 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:34 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:34.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:34 compute-0 nova_compute[257087]: 2025-12-05 10:38:34.088 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:34 compute-0 nova_compute[257087]: 2025-12-05 10:38:34.092 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:34 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1490: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Dec 05 10:38:34 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27515 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:34 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27748 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:34 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17565 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:35 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27524 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:35 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27754 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:35 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:35 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:35 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:35.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:35 compute-0 ceph-mon[74418]: pgmap v1490: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Dec 05 10:38:35 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17577 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:35 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:38:35] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:38:35 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:38:35] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec 05 10:38:35 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec 05 10:38:35 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/51854429' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 10:38:36 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:36 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:38:36 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:36.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:38:36 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1491: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 13 op/s
Dec 05 10:38:36 compute-0 ceph-mon[74418]: from='client.27515 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:36 compute-0 ceph-mon[74418]: from='client.27748 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:36 compute-0 ceph-mon[74418]: from='client.17565 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:36 compute-0 ceph-mon[74418]: from='client.27524 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:36 compute-0 ceph-mon[74418]: from='client.27754 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1971999875' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 10:38:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1618180446' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 10:38:36 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/51854429' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 10:38:37 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:37 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:37 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:37.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:37 compute-0 ceph-mon[74418]: from='client.17577 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:37 compute-0 ceph-mon[74418]: pgmap v1491: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 13 op/s
Dec 05 10:38:37 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:38:37 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:37.593Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:38:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:37 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:38:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:38:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:38 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:38:38 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:38 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:38 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:38.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:38 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1492: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:38 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:38.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:39 compute-0 nova_compute[257087]: 2025-12-05 10:38:39.093 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:39 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:39 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:39 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:39.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:39 compute-0 sudo[297160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:38:39 compute-0 sudo[297160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:38:39 compute-0 sudo[297160]: pam_unix(sudo:session): session closed for user root
Dec 05 10:38:39 compute-0 ovs-vsctl[297211]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 05 10:38:40 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:40 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:40 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:40.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:40 compute-0 ceph-mon[74418]: pgmap v1492: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:40 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1493: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:40 compute-0 virtqemud[256610]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 05 10:38:40 compute-0 virtqemud[256610]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 05 10:38:40 compute-0 virtqemud[256610]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 05 10:38:40 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27539 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:40 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27769 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:41 compute-0 ceph-mon[74418]: pgmap v1493: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 05 10:38:41 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:38:41 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:41 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:41 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:41.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:41 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 05 10:38:41 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:38:41 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27554 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:41 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27784 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:41 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: cache status {prefix=cache status} (starting...)
Dec 05 10:38:41 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:38:41 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: client ls {prefix=client ls} (starting...)
Dec 05 10:38:41 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:38:41 compute-0 lvm[297556]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:38:41 compute-0 lvm[297556]: VG ceph_vg0 finished
Dec 05 10:38:41 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27799 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:41 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27566 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:42 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:42 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 05 10:38:42 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:42.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 05 10:38:42 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27578 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27814 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mon[74418]: from='client.27539 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mon[74418]: from='client.27769 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2458451645' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mon[74418]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1641440035' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mon[74418]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mon[74418]: from='client.27554 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mon[74418]: from='client.27784 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4070758948' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3315264088' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/80458650' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3968024101' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1494: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:42 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17613 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: damage ls {prefix=damage ls} (starting...)
Dec 05 10:38:42 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:38:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:38:42 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: dump loads {prefix=dump loads} (starting...)
Dec 05 10:38:42 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:38:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 05 10:38:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606365878' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 05 10:38:42 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:38:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:38:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 05 10:38:42 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:38:42 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17625 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27620 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 05 10:38:42 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:38:42 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:38:42 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343430683' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:38:42 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27835 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:42 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:38:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:38:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:38:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:43 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:38:43 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 05 10:38:43 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.27799 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.27566 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.27578 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.27814 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: pgmap v1494: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.17613 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1453043286' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1037396302' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/606365878' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1428693566' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3156018413' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1343430683' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/56748825' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/665280971' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:38:43 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:43 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:43 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:43.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:43 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17643 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27647 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27856 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 05 10:38:43 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:38:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec 05 10:38:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260780716' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 05 10:38:43 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:38:43 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17661 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: ops {prefix=ops} (starting...)
Dec 05 10:38:43 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:38:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 05 10:38:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:38:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 05 10:38:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:38:43 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:43.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:43 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 05 10:38:43 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278927886' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 05 10:38:44 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:44 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:44 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:44.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:44 compute-0 nova_compute[257087]: 2025-12-05 10:38:44.096 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 05 10:38:44 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3837474208' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1495: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.17625 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.27620 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.27835 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.17643 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.27647 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.27856 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1260780716' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/899668397' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1064769463' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3901563256' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/4269104973' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3890705118' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/278927886' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3490054053' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3837474208' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17688 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: session ls {prefix=session ls} (starting...)
Dec 05 10:38:44 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk Can't run that command on an inactive MDS!
Dec 05 10:38:44 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 05 10:38:44 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769299005' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27913 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mgr[74711]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 10:38:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T10:38:44.592+0000 7f687e376640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 10:38:44 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27704 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:44 compute-0 ceph-mgr[74711]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 10:38:44 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T10:38:44.613+0000 7f687e376640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 10:38:44 compute-0 ceph-mds[96460]: mds.cephfs.compute-0.hfgtsk asok_command: status {prefix=status} (starting...)
Dec 05 10:38:44 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17706 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 05 10:38:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/703210080' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:38:45 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:45 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:45 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:45.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 05 10:38:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367316436' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: from='client.17661 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: pgmap v1495: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/643405803' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/4011187110' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3639279191' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: from='client.17688 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3621162477' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/769299005' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2853311633' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1313337604' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1740995291' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1556936519' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/703210080' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27949 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:45 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:38:45] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:38:45 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:38:45] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:38:45 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27758 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 05 10:38:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/971355504' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 05 10:38:45 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834776471' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 05 10:38:45 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27955 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:46 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:46 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:46.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 10:38:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117366411' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1496: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:46 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17757 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mgr[74711]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 10:38:46 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: 2025-12-05T10:38:46.279+0000 7f687e376640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 10:38:46 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27973 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.27913 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.27704 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.17706 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2664539433' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3122665522' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1367316436' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.27949 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3894013530' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.27758 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/971355504' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1203395860' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1834776471' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.27955 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/507484347' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3117366411' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1459871513' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: pgmap v1496: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.17757 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: from='client.27973 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27779 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 05 10:38:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1447698336' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27991 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 05 10:38:46 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165760439' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 05 10:38:46 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27797 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 05 10:38:47 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373205986' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28012 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:47 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:47 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:47 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:47.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 05 10:38:47 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968135546' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27821 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:38:47 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17805 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28033 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:47.594Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 05 10:38:47 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:47.595Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.27779 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1098576192' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1447698336' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2951585843' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.27991 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1165760439' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.27797 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/995388145' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/113151397' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2373205986' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.28012 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3968135546' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.27821 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/543992692' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27839 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17823 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:47 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28048 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:38:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:38:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:47 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:38:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:48 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:38:48 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:48 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:48 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:48.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:48 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1497: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:48 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27866 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:48 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28057 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:48 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17844 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:46.930819+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 1515520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:47.931008+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 1507328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951348 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:48.931179+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 1507328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:49.931321+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 1507328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:50.931458+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 1507328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:51.931629+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 1507328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:52.931769+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 1507328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952860 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fec00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:53.931922+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:54.932048+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:55.937667+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.229444504s of 10.239780426s, submitted: 3
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:56.937881+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:57.938044+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953781 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:58.938194+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:59.938356+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:00.938499+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:01.938681+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:02.938824+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:03.938970+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:04.939101+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:05.939263+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:06.939420+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:07.939527+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:08.939651+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:09.939797+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:10.939959+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:11.940212+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:12.940381+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:13.940512+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:14.940699+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:15.940848+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:16.940970+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:17.941070+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:18.941209+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:19.941351+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:20.941530+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:21.941765+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:22.941909+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:23.942096+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:24.942416+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:25.942632+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:26.942825+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:27.942997+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:28.943218+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:29.943451+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:30.943632+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:31.943876+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:32.944119+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:33.944324+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:34.944445+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:35.944611+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:36.944864+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:37.945028+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:38.945263+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:39.945639+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:40.945773+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:41.945981+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:42.946169+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:43.946332+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:44.946446+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:45.946542+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:46.946675+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:47.946814+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:48.946943+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:49.947066+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:50.947337+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:51.947517+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:52.947644+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 1499136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:53.947779+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:54.947888+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.823249817s of 58.867816925s, submitted: 2
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:55.948110+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:57.286323+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:58.286459+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1490944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953721 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:59.286590+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 2449408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:00.286704+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:01.286831+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:02.287011+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:03.287132+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:04.287749+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:05.287867+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:06.287968+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:07.288110+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:08.288242+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:09.288347+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:10.290556+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:11.290693+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:12.290845+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:13.290958+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:14.291102+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:15.291284+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:16.291418+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:17.291529+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:18.291734+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271fe800 session 0x563f26e7eb40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f242d4800 session 0x563f272b32c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:19.291852+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:20.291994+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:21.292178+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:22.292375+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:23.292687+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:24.292839+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:25.293070+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:26.293276+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:27.293408+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:28.293592+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953649 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:29.293707+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.996955872s of 34.081897736s, submitted: 213
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:30.293869+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:31.294015+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:32.294192+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:33.294343+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953781 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:34.294504+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:35.294673+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:36.294855+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:37.295040+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:38.295181+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953190 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:39.361125+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:40.361335+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:41.361472+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.063858986s of 12.074111938s, submitted: 2
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:42.361780+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:43.363016+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:44.363184+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:45.363676+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:46.364365+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:47.364561+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 2416640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:48.364675+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:49.364895+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:50.365026+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:51.365328+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:52.365567+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:53.365879+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:54.366117+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:55.366350+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:56.366498+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:57.366619+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:58.366752+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:59.366881+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:00.366998+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 2375680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:01.367118+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:02.367332+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:03.367465+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271fec00 session 0x563f26e7f2c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271fe000 session 0x563f24066b40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:04.367883+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:05.368136+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:06.368433+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:07.368589+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:08.368724+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:09.369907+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 2367488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:10.370089+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 2359296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:11.371399+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 2359296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:12.371557+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 2359296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:13.371732+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 2359296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952467 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.860660553s of 32.466499329s, submitted: 120
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:14.371890+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:15.372137+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:16.372372+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:17.372598+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:18.372753+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954111 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:19.372948+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:20.373133+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:21.373378+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:22.373608+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271ff400 session 0x563f2707c3c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271ff000 session 0x563f2639e3c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:23.373754+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954111 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:24.373963+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 2342912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:25.374145+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 2334720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:26.374370+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 2334720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:27.374660+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 2334720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:28.374856+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 2334720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954111 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:29.375116+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 2334720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:30.375305+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 2334720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:31.375489+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.264575958s of 17.273166656s, submitted: 2
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:32.375702+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:33.375873+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954111 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:34.376028+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:35.376169+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:36.376305+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:37.376520+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:38.376647+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955623 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:39.376835+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:40.376952+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:41.377093+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:42.377313+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.155375481s of 11.168437958s, submitted: 3
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:43.377452+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:44.377602+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955032 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:45.377780+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:46.377851+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:47.377982+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:48.378107+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:49.378325+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955032 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:50.378473+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:51.378624+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:52.378824+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 2310144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:53.378959+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:54.379092+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:55.379265+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:56.379419+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:57.379558+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:58.379712+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:59.379845+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:00.380008+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:01.380172+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:02.380396+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:03.380563+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:04.380788+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:05.380941+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:06.381102+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:07.381296+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:08.381432+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:09.381585+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:10.381712+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:11.381857+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:12.382067+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:13.382394+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:14.382510+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:15.382681+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f23e89c00 session 0x563f23eade00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:16.382950+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:17.383142+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:18.383325+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:19.384308+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:20.384509+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:21.384887+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:22.385139+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:23.385341+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:24.385578+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954900 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:25.385725+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.550487518s of 43.558589935s, submitted: 2
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:26.385894+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:27.386067+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:28.386308+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:29.386579+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956544 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:30.386754+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:31.387001+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:32.387184+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:33.387354+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:34.387546+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956544 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:35.387695+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:36.387866+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:37.388017+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:38.388184+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.983158112s of 12.994561195s, submitted: 3
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:39.388364+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:40.388531+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:41.388722+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:42.388978+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:43.389170+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:44.389386+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:45.389535+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:46.389644+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:47.389764+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:48.389906+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 2301952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:49.390049+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:50.390222+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:51.390392+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:52.390558+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:53.390756+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:54.390943+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:55.391088+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:56.391293+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:57.391431+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:58.391550+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:59.391701+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:00.391852+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:01.392032+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:02.392286+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:03.392442+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:04.392565+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:05.392687+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:06.393007+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:07.393177+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:08.393302+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:09.393501+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:10.393678+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:11.393846+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:12.394083+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:13.394279+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:14.394447+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:15.394619+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:16.394797+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:17.394957+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:18.395112+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:19.395269+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:20.395416+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:21.395553+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:22.395756+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:23.395932+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:24.396150+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:25.396285+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:26.396442+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:27.396637+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:28.396835+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:29.397084+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:30.397313+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:31.397603+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:32.397866+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 ms_handle_reset con 0x563f271fe800 session 0x563f26e7ef00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:33.398037+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 2293760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:34.398280+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:35.398526+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:36.398661+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:37.398846+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:38.398971+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:39.399147+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955821 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:40.399329+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:41.399556+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:42.399786+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fec00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.717926025s of 63.721668243s, submitted: 1
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:43.399925+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:44.400134+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955953 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:45.400401+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:46.400570+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:47.400729+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:48.400883+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:49.401035+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958977 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:50.401224+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:51.401565+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:52.401773+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:53.401991+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:54.402209+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958386 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:55.402407+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.878764153s of 13.278193474s, submitted: 4
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:56.402546+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:57.402759+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:58.402942+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:59.403134+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:00.403367+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:01.403618+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:02.403813+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:03.404457+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:04.404630+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:05.404788+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:06.404934+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:07.405054+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:08.405175+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:09.405343+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:10.409726+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:11.409885+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:12.410102+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:13.410286+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:14.410459+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:15.410602+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:16.410737+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:17.410891+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:18.411077+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:19.411270+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:20.411431+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:21.411617+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:22.411858+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:23.412022+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:24.412181+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:25.412341+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:26.412470+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:27.412613+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:28.412736+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:29.412857+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:30.413007+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:31.413145+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:32.413324+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:33.413460+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:34.413643+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:35.437845+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:36.438010+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:37.438201+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:38.438430+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:39.438611+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:40.438754+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:41.438877+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:42.439040+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:43.439183+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:44.439318+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:45.439805+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:46.440011+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc67f000/0x0/0x4ffc00000, data 0xdef89/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:47.440169+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:48.440324+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:49.440455+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 2285568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ffc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958254 data_alloc: 218103808 data_used: 151552
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:50.440577+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 2277376 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 54.217182159s of 54.254943848s, submitted: 1
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:51.440730+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 2277376 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 132 ms_handle_reset con 0x563f271fec00 session 0x563f272b3680
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0xe31b5/0x193000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:52.441010+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 18898944 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 134 ms_handle_reset con 0x563f271ffc00 session 0x563f23e065a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:53.441193+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:54.443691+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 86155264 unmapped: 18669568 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 135 ms_handle_reset con 0x563f23e89c00 session 0x563f26fd74a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080728 data_alloc: 218103808 data_used: 155648
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:55.450537+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 18661376 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:56.450782+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 18661376 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fb66f000/0x0/0x4ffc00000, data 0x10e73f8/0x119b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:57.451099+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 18661376 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:58.451338+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:59.451461+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082722 data_alloc: 218103808 data_used: 155648
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:00.451621+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66d000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:01.451768+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.318701744s of 11.500458717s, submitted: 46
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:02.451987+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:03.452355+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:04.452656+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 18874368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66d000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082854 data_alloc: 218103808 data_used: 155648
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:05.452870+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:06.453018+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:07.453219+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:08.453441+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:09.453616+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083526 data_alloc: 218103808 data_used: 155648
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:10.453785+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:11.453937+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:12.454229+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:13.454460+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:14.454679+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082344 data_alloc: 218103808 data_used: 155648
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:15.454855+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:16.455007+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:17.455173+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 18866176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.933827400s of 15.960062981s, submitted: 4
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:18.455357+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 18857984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:19.455522+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 18857984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082212 data_alloc: 218103808 data_used: 155648
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:20.455679+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:21.455834+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:22.456017+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:23.456280+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:24.456506+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082212 data_alloc: 218103808 data_used: 155648
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:25.456661+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:26.456791+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 ms_handle_reset con 0x563f271ff000 session 0x563f271ead20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 ms_handle_reset con 0x563f271fe800 session 0x563f272b2b40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:27.456945+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:28.457169+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:29.457353+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082212 data_alloc: 218103808 data_used: 155648
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:30.457591+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:31.457858+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:32.458074+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:33.458249+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:34.458481+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:35.458636+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082212 data_alloc: 218103808 data_used: 155648
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:36.458957+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:37.459582+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 18849792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 ms_handle_reset con 0x563f271ff400 session 0x563f24c3eb40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.366529465s of 20.370347977s, submitted: 1
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:38.459749+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 11108352 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 ms_handle_reset con 0x563f23e89c00 session 0x563f263985a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:39.459916+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 11108352 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb66e000/0x0/0x4ffc00000, data 0x10e93ca/0x119e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:40.460102+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101952 data_alloc: 218103808 data_used: 6975488
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 11108352 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:41.460255+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93724672 unmapped: 11100160 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 138 ms_handle_reset con 0x563f271ff000 session 0x563f271d8780
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:42.460463+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93724672 unmapped: 14254080 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:43.460717+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 14237696 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fafcc000/0x0/0x4ffc00000, data 0x17875f6/0x183e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:44.460886+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 14237696 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:45.461105+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157320 data_alloc: 218103808 data_used: 6975488
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 14237696 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:46.461320+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 14237696 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fafcc000/0x0/0x4ffc00000, data 0x17875f6/0x183e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:47.461483+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 14237696 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ffc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 138 ms_handle_reset con 0x563f271ffc00 session 0x563f26e18780
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _renew_subs
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.707313538s of 10.020560265s, submitted: 16
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:48.461610+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 94076928 unmapped: 13901824 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f2631f400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:49.461747+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 8994816 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:50.461902+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208862 data_alloc: 234881024 data_used: 13901824
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100474880 unmapped: 7503872 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:51.462061+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100483072 unmapped: 7495680 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:52.462355+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fafa6000/0x0/0x4ffc00000, data 0x17ad5c8/0x1865000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7487488 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:53.462541+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7487488 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:54.462688+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7487488 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:55.462860+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207299 data_alloc: 234881024 data_used: 13901824
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7487488 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:56.463007+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7487488 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:57.463147+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7479296 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fafa7000/0x0/0x4ffc00000, data 0x17ad5c8/0x1865000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:58.463311+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7479296 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:59.463482+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fafa7000/0x0/0x4ffc00000, data 0x17ad5c8/0x1865000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f271fc000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f2701eb40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100507648 unmapped: 7471104 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:00.463707+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207299 data_alloc: 234881024 data_used: 13901824
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 100507648 unmapped: 7471104 heap: 107978752 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.999906540s of 13.015459061s, submitted: 11
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:01.463994+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105619456 unmapped: 4464640 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:02.464366+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0x21085c8/0x21c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 4890624 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:03.464598+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 4808704 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:04.464876+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 4808704 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:05.465165+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295361 data_alloc: 234881024 data_used: 14913536
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 4808704 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:06.465428+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 4808704 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa641000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:07.465690+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 4808704 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:08.465909+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105283584 unmapped: 4800512 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:09.466086+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105283584 unmapped: 4800512 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:10.466279+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296273 data_alloc: 234881024 data_used: 14983168
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 4759552 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:11.466437+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 4759552 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:12.466632+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 4759552 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa641000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:13.466837+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 4759552 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:14.467044+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:15.467221+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296405 data_alloc: 234881024 data_used: 14983168
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa641000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:16.467489+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:17.467648+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:18.467841+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:19.468060+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.198905945s of 18.406009674s, submitted: 58
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa641000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:20.468251+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294974 data_alloc: 234881024 data_used: 14983168
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:21.468443+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa641000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:22.468674+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e89c00 session 0x563f2639e000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f26b33c20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:23.468816+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f25a3dc20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 4751360 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ffc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ffc00 session 0x563f26d574a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26362c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26362c00 session 0x563f2709e3c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:24.468937+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105914368 unmapped: 4169728 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e89c00 session 0x563f23e04d20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa640000/0x0/0x4ffc00000, data 0x21135d8/0x21cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,1,2,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f271d85a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26362c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:25.469080+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26362c00 session 0x563f25a3d680
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f26d46b40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ffc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ffc00 session 0x563f26d46d20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e89c00 session 0x563f26d46f00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367848 data_alloc: 234881024 data_used: 15507456
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 19677184 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:26.469403+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 19677184 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:27.469601+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 19677184 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:28.469942+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9c6a000/0x0/0x4ffc00000, data 0x2ae95d8/0x2ba2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f26d472c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 19644416 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:29.470133+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 19644416 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:30.470322+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26362c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26362c00 session 0x563f26d47680
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367848 data_alloc: 234881024 data_used: 15507456
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 19644416 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f26d47860
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ffc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:31.470503+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.197183609s of 11.661909103s, submitted: 10
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ffc00 session 0x563f26d47a40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 20185088 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:32.470713+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9c69000/0x0/0x4ffc00000, data 0x2ae95fb/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105676800 unmapped: 20152320 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:33.470971+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105709568 unmapped: 20119552 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:34.471145+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115228672 unmapped: 10600448 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:35.471293+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439221 data_alloc: 234881024 data_used: 25825280
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9c69000/0x0/0x4ffc00000, data 0x2ae95fb/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 10567680 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:36.471458+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 10567680 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:37.472300+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 10567680 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:38.472423+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 10567680 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9c69000/0x0/0x4ffc00000, data 0x2ae95fb/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:39.473038+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 10534912 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:40.473513+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439221 data_alloc: 234881024 data_used: 25825280
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 10534912 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:41.473691+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 10534912 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:42.473871+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9c69000/0x0/0x4ffc00000, data 0x2ae95fb/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 10502144 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:43.474069+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9c69000/0x0/0x4ffc00000, data 0x2ae95fb/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 10502144 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:44.474347+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.484658241s of 13.498271942s, submitted: 4
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 7168000 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:45.474606+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466331 data_alloc: 234881024 data_used: 25829376
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 7168000 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:46.474949+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 7168000 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff400 session 0x563f271fc3c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe800 session 0x563f271ebc20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:47.475120+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 6815744 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:48.475291+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 7397376 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:49.475407+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8712000/0x0/0x4ffc00000, data 0x2ea05fb/0x2f5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 7397376 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:50.475520+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1478667 data_alloc: 234881024 data_used: 26726400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 7397376 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:51.475787+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8712000/0x0/0x4ffc00000, data 0x2ea05fb/0x2f5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7364608 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:52.476027+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7364608 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:53.476193+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7364608 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:54.476346+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8712000/0x0/0x4ffc00000, data 0x2ea05fb/0x2f5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 7331840 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:55.476513+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1478667 data_alloc: 234881024 data_used: 26726400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 7331840 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:56.476683+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8712000/0x0/0x4ffc00000, data 0x2ea05fb/0x2f5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 7331840 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:57.477030+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e89c00 session 0x563f25a3d0e0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 7331840 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.281052589s of 13.395196915s, submitted: 38
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:58.477186+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 15196160 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:59.477375+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f272081e0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:00.477642+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f94a0000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303085 data_alloc: 234881024 data_used: 15507456
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:01.477930+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:02.478219+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:03.478533+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:04.478696+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:05.478802+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303085 data_alloc: 234881024 data_used: 15507456
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:06.479010+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f94a0000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:07.479192+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f2631f400 session 0x563f2707cb40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25ea2c00 session 0x563f26e18b40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 15122432 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.288631439s of 10.002907753s, submitted: 27
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f94a0000/0x0/0x4ffc00000, data 0x21135c8/0x21cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:08.479399+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e89c00 session 0x563f271fc5a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 20455424 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:09.479661+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 20455424 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:10.479821+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127863 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 20455424 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:11.479998+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 20455424 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:12.480205+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 20455424 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:13.480400+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:14.480536+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:15.480802+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127731 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:16.481016+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:17.481177+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:18.481347+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:19.481557+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:20.481711+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127731 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:21.481849+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:22.482046+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:23.482167+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:24.482327+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:25.482495+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127731 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:26.482653+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:27.482743+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:28.482888+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:29.482971+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:30.483085+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127731 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:31.483268+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:32.483477+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:33.483594+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:34.483768+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:35.483923+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127731 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:36.484040+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:37.484171+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:38.484293+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:39.484453+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:40.484568+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127731 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:41.484672+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:42.484877+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:43.485041+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 20447232 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.049350739s of 36.060447693s, submitted: 3
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:44.485138+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f26774780
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104914944 unmapped: 26173440 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:45.485319+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181881 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104914944 unmapped: 26173440 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:46.485461+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104914944 unmapped: 26173440 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:47.485625+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104914944 unmapped: 26173440 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:48.485826+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d43000/0x0/0x4ffc00000, data 0x18715c8/0x1929000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104914944 unmapped: 26173440 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:49.486004+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe800 session 0x563f26d463c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f2639e3c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 26107904 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:50.486138+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183518 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 26107904 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:51.486307+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:52.486475+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:53.486598+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d42000/0x0/0x4ffc00000, data 0x18715eb/0x192a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:54.486793+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:55.486941+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236870 data_alloc: 234881024 data_used: 15372288
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:56.487110+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:57.487382+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:58.487583+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d42000/0x0/0x4ffc00000, data 0x18715eb/0x192a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:59.487747+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d42000/0x0/0x4ffc00000, data 0x18715eb/0x192a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:00.487888+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236870 data_alloc: 234881024 data_used: 15372288
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:01.488058+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.800022125s of 17.457147598s, submitted: 10
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:02.488319+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d42000/0x0/0x4ffc00000, data 0x18715eb/0x192a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:03.488457+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 24829952 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d42000/0x0/0x4ffc00000, data 0x18715eb/0x192a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:04.488628+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 20545536 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:05.488809+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 20545536 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287764 data_alloc: 234881024 data_used: 15491072
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:06.488936+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 21200896 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.9 total, 600.0 interval
                                           Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2676 syncs, 3.83 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1133 writes, 3083 keys, 1133 commit groups, 1.0 writes per commit group, ingest: 2.58 MB, 0.00 MB/s
                                           Interval WAL: 1133 writes, 503 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:07.489081+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 21200896 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:08.489255+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 21200896 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1ebc5eb/0x1f75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:09.489995+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1ebc5eb/0x1f75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:10.490546+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288676 data_alloc: 234881024 data_used: 15749120
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:11.490949+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:12.491321+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:13.491581+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:14.491841+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:15.492079+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.498682976s of 13.849031448s, submitted: 41
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1ebc5eb/0x1f75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288544 data_alloc: 234881024 data_used: 15749120
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:16.492296+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:17.492825+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:18.493143+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1ebc5eb/0x1f75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:19.493586+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:20.493859+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288544 data_alloc: 234881024 data_used: 15749120
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:21.494316+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:22.494758+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1ebc5eb/0x1f75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:23.495151+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:24.495363+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:25.495658+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288544 data_alloc: 234881024 data_used: 15749120
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:26.495873+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1ebc5eb/0x1f75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:27.496122+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f23dd9680
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 21110784 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.286449432s of 12.293078423s, submitted: 1
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f25d65860
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:28.496391+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:29.496582+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:30.496744+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136804 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:31.496925+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:32.497208+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:33.497480+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:34.497670+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:35.497815+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136804 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:36.498006+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:37.498282+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:38.498481+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:39.498691+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:40.498854+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136804 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:41.498992+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:42.499144+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:43.499296+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:44.499601+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:45.499767+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136804 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:46.500012+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:47.500502+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:48.500780+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:49.501205+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:50.501396+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136804 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:51.501715+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:52.502024+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:53.502286+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:54.502475+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:55.502755+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 26607616 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.865468979s of 27.915294647s, submitted: 19
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25ea2c00 session 0x563f2615af00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe800 session 0x563f25db4960
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff400 session 0x563f24c3ef00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f25db2b40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25ea2c00 session 0x563f23e041e0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172653 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:56.503044+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa01d000/0x0/0x4ffc00000, data 0x15975c8/0x164f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:57.503302+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:58.503516+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa01d000/0x0/0x4ffc00000, data 0x15975c8/0x164f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:59.503737+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:00.503913+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172653 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:01.504168+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa01d000/0x0/0x4ffc00000, data 0x15975c8/0x164f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:02.504413+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe800 session 0x563f25db5a40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:03.504566+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f267750e0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:04.504714+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26362c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26362c00 session 0x563f24c305a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f24c50b40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104611840 unmapped: 26476544 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa01d000/0x0/0x4ffc00000, data 0x15975c8/0x164f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:05.504863+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26468352 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174718 data_alloc: 218103808 data_used: 7532544
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:06.505076+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 26140672 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:07.505262+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:08.505503+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:09.505673+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa01d000/0x0/0x4ffc00000, data 0x15975c8/0x164f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:10.505843+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206638 data_alloc: 234881024 data_used: 12292096
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:11.506072+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:12.506321+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:13.506492+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa01d000/0x0/0x4ffc00000, data 0x15975c8/0x164f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:14.506627+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:15.506844+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:16.507041+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206638 data_alloc: 234881024 data_used: 12292096
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 25157632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.268453598s of 21.364921570s, submitted: 22
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:17.507215+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 19488768 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:18.507442+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21020672 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:19.507606+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x1c265c8/0x1cde000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 20979712 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:20.507851+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 20930560 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:21.508060+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274556 data_alloc: 234881024 data_used: 12996608
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 20889600 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:22.508320+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 20889600 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:23.508476+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 20889600 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f997c000/0x0/0x4ffc00000, data 0x1c385c8/0x1cf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:24.508806+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 20889600 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:25.508967+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 20889600 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:26.509114+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274876 data_alloc: 234881024 data_used: 13004800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:27.509311+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:28.509418+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f997c000/0x0/0x4ffc00000, data 0x1c385c8/0x1cf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:29.509570+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:30.509745+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:31.509912+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275788 data_alloc: 234881024 data_used: 13074432
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:32.510129+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:33.510413+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f997c000/0x0/0x4ffc00000, data 0x1c385c8/0x1cf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:34.510587+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:35.510734+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:36.510898+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275788 data_alloc: 234881024 data_used: 13074432
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:37.511075+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 20848640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:38.511336+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f997c000/0x0/0x4ffc00000, data 0x1c385c8/0x1cf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 20815872 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:39.511553+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 20815872 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:40.511745+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 20815872 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:41.511913+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275788 data_alloc: 234881024 data_used: 13074432
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 20815872 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:42.512151+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f27085860
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f24c3d680
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 20144128 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:43.512314+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f997c000/0x0/0x4ffc00000, data 0x1c385c8/0x1cf0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f24c56800 session 0x563f249e9e00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d89c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 20037632 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f24c56800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.912588120s of 26.871160507s, submitted: 58
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f24c56800 session 0x563f23370d20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:44.512451+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e88000 session 0x563f23ff6d20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e88000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:45.512645+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:46.512961+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287814 data_alloc: 234881024 data_used: 13074432
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:47.513167+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d555c8/0x1e0d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:48.513360+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d555c8/0x1e0d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f24c56800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f24c56800 session 0x563f24c31a40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:49.513485+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:50.513596+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 21929984 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d555c8/0x1e0d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:51.513800+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f24c30960
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f985f000/0x0/0x4ffc00000, data 0x1d555c8/0x1e0d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288118 data_alloc: 234881024 data_used: 13074432
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f985d000/0x0/0x4ffc00000, data 0x1d565c8/0x1e0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109142016 unmapped: 21946368 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:52.514034+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f24c30000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f25de3a40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109142016 unmapped: 21946368 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:53.514174+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b2c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 109142016 unmapped: 21946368 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:54.514298+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.773413658s of 10.834179878s, submitted: 11
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 23470080 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:55.514414+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d565d8/0x1e0f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108871680 unmapped: 22216704 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:56.514532+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295780 data_alloc: 234881024 data_used: 13955072
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 22126592 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:57.514680+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 22118400 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:58.514819+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 22118400 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:59.514965+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 22110208 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:00.515122+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 22110208 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:01.515344+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f944d000/0x0/0x4ffc00000, data 0x1d565d8/0x1e0f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295780 data_alloc: 234881024 data_used: 13955072
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 22102016 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:02.515725+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 22102016 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:03.516010+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 22102016 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f944d000/0x0/0x4ffc00000, data 0x1d565d8/0x1e0f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:04.516194+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 22102016 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:05.516367+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 22102016 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.720484734s of 11.360827446s, submitted: 234
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:06.516517+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336510 data_alloc: 234881024 data_used: 13955072
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f944d000/0x0/0x4ffc00000, data 0x1d565d8/0x1e0f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 16007168 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:07.516657+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 16580608 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:08.516897+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 16572416 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:09.517103+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 16572416 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:10.517630+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 16572416 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8d7f000/0x0/0x4ffc00000, data 0x24245d8/0x24dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:11.517788+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354850 data_alloc: 234881024 data_used: 15097856
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8d7f000/0x0/0x4ffc00000, data 0x24245d8/0x24dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 16572416 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:12.518019+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8d7f000/0x0/0x4ffc00000, data 0x24245d8/0x24dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 16695296 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:13.518187+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 16695296 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:14.518363+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 16695296 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:15.518531+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f25de21e0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b2c00 session 0x563f25de3860
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.546108246s of 10.014736176s, submitted: 83
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:16.518652+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f2639fc20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8d5e000/0x0/0x4ffc00000, data 0x24455d8/0x24fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280789 data_alloc: 234881024 data_used: 13074432
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:17.518786+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1c395c8/0x1cf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:18.518925+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:19.519113+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:20.519301+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1c395c8/0x1cf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:21.519516+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280789 data_alloc: 234881024 data_used: 13074432
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 17342464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:22.519800+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25ea2c00 session 0x563f2707c960
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe800 session 0x563f23dda960
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f24c56800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 19415040 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1c395c8/0x1cf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:23.519950+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f24c56800 session 0x563f23dd9c20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:24.520162+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:25.520388+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:26.520644+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153699 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:27.520892+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:28.521136+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:29.521359+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:30.521509+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 19398656 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:31.521712+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153699 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 19390464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:32.521887+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 19390464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:33.522067+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 19390464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:34.522337+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:35.522518+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 19390464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:36.522702+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 19390464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153699 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:37.522856+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 19390464 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:38.523114+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:39.523302+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:40.523531+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:41.523749+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153699 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:42.523981+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:43.524167+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:44.524361+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe000 session 0x563f25d62960
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f2639e960
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:45.524573+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d58800 session 0x563f249e9a40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:46.524761+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 19382272 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153699 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.601390839s of 30.812540054s, submitted: 44
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:47.524949+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111181824 unmapped: 19906560 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25ea3000 session 0x563f24d214a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d58800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:48.525124+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 19865600 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:49.525355+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 19824640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:50.525560+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 19824640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:51.525753+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 19824640 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153699 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:52.525910+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b2c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 19816448 heap: 131088384 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b2c00 session 0x563f26b33e00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f26b325a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:53.526052+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 23322624 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:54.526285+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 23322624 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:55.526452+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 23322624 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f25db3a40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f26d46d20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:56.527006+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 23314432 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182372 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f23dd9e00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b2c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b2c00 session 0x563f26d46b40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:57.527162+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 23625728 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.542368889s of 10.452962875s, submitted: 147
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x14b962a/0x1572000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:58.527350+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 23625728 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:59.527495+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 23609344 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:00.527627+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 23609344 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:01.528017+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 23609344 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210105 data_alloc: 234881024 data_used: 11472896
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:02.528272+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 23609344 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:03.528452+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 23609344 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x14b962a/0x1572000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:04.528722+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 23601152 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:05.528900+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 23601152 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:06.529091+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 23601152 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210105 data_alloc: 234881024 data_used: 11472896
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:07.529334+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 23601152 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x14b962a/0x1572000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:08.529567+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 23601152 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:09.529797+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.902854919s of 11.906520844s, submitted: 1
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 23592960 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:10.529934+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 23027712 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:11.530512+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 23027712 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251197 data_alloc: 234881024 data_used: 11472896
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:12.530694+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 23019520 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:13.530836+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:14.531139+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:15.531333+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:16.531532+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252459 data_alloc: 234881024 data_used: 11472896
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:17.531692+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:18.531878+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:19.532074+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 23863296 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:20.532423+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 23855104 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:21.532608+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 23855104 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252459 data_alloc: 234881024 data_used: 11472896
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:22.534401+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 23855104 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:23.534563+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 23855104 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:24.534697+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 23855104 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:25.534842+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 23855104 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:26.535141+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252459 data_alloc: 234881024 data_used: 11472896
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:27.535390+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:28.536604+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:29.536965+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:30.537873+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:31.538004+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252459 data_alloc: 234881024 data_used: 11472896
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:32.538388+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:33.538935+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:34.539592+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:35.539779+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:36.539949+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252459 data_alloc: 234881024 data_used: 11472896
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:37.540094+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:38.540268+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:39.540432+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:40.540569+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:41.540702+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252459 data_alloc: 234881024 data_used: 11472896
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:42.540875+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:43.541058+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x1a5062a/0x1b09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 23846912 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.127941132s of 34.422428131s, submitted: 37
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:44.541251+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 23838720 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:45.541430+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 23838720 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:46.548119+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 23838720 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252763 data_alloc: 234881024 data_used: 11472896
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9751000/0x0/0x4ffc00000, data 0x1a5162a/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:47.548312+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 23322624 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f25de21e0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff800 session 0x563f271eab40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f23370d20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f24c50b40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b2c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b2c00 session 0x563f25de3a40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:48.548498+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 23306240 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:49.548662+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 23306240 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f25db21e0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:50.548825+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 23306240 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e89800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e89800 session 0x563f263981e0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:51.549051+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f26fd7860
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 23314432 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f2707cb40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299412 data_alloc: 234881024 data_used: 11476992
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f91fa000/0x0/0x4ffc00000, data 0x1fa864d/0x2062000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:52.549295+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b2c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 23363584 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:53.549486+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113483776 unmapped: 21282816 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:54.549668+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f91fa000/0x0/0x4ffc00000, data 0x1fa864d/0x2062000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114458624 unmapped: 20307968 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:55.549860+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114458624 unmapped: 20307968 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:56.550048+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114458624 unmapped: 20307968 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337676 data_alloc: 234881024 data_used: 16990208
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:57.550224+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 20299776 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:58.550470+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 20299776 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:59.550650+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 20299776 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:00.550786+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f91fa000/0x0/0x4ffc00000, data 0x1fa864d/0x2062000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 20299776 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:01.550923+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 20291584 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337676 data_alloc: 234881024 data_used: 16990208
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:02.551121+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 20291584 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:03.551260+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 20291584 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:04.551377+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.942338943s of 20.544075012s, submitted: 30
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f91fa000/0x0/0x4ffc00000, data 0x1fa864d/0x2062000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,2])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 15704064 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:05.551528+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 14262272 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:06.551685+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 14278656 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f896f000/0x0/0x4ffc00000, data 0x283364d/0x28ed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409154 data_alloc: 234881024 data_used: 17223680
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:07.551811+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8940000/0x0/0x4ffc00000, data 0x286264d/0x291c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120537088 unmapped: 14229504 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:08.551993+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120537088 unmapped: 14229504 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:09.552143+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8940000/0x0/0x4ffc00000, data 0x286264d/0x291c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 14221312 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:10.552282+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 14221312 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:11.552427+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 14180352 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406866 data_alloc: 234881024 data_used: 17223680
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:12.552606+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 14180352 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:13.552742+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 14180352 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:14.552950+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f893d000/0x0/0x4ffc00000, data 0x286564d/0x291f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 14180352 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:15.553095+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 14172160 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:16.553296+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 14172160 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406866 data_alloc: 234881024 data_used: 17223680
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:17.553454+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 14172160 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:18.553628+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f893d000/0x0/0x4ffc00000, data 0x286564d/0x291f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 14172160 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.714754105s of 14.909518242s, submitted: 90
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b2c00 session 0x563f26fd72c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f249e8b40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:19.553772+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8e400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115744768 unmapped: 19021824 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8e400 session 0x563f263992c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:20.553952+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:21.554133+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263838 data_alloc: 234881024 data_used: 11476992
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:22.554329+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9751000/0x0/0x4ffc00000, data 0x1a5162a/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:23.554490+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:24.554668+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:25.554860+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9751000/0x0/0x4ffc00000, data 0x1a5162a/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:26.555186+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263838 data_alloc: 234881024 data_used: 11476992
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:27.555580+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 19013632 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:28.556055+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 19005440 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:29.556316+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 19005440 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9751000/0x0/0x4ffc00000, data 0x1a5162a/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:30.556595+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 19005440 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:31.556722+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 19005440 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263838 data_alloc: 234881024 data_used: 11476992
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:32.556889+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 19005440 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:33.557081+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 19005440 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9751000/0x0/0x4ffc00000, data 0x1a5162a/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:34.557266+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.300795555s of 15.466802597s, submitted: 51
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe000 session 0x563f25db34a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f26d47e00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 18997248 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:35.557410+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 18997248 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:36.557698+0000)
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.17805 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171956 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:37.557916+0000)
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.28033 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f25d64f00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1162541838' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:38.558092+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:39.558342+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:40.558532+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:41.558729+0000)
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3270332070' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.27839 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171364 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.17823 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.28048 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:42.558969+0000)
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2355639283' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2857029444' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:43.559185+0000)
Dec 05 10:38:48 compute-0 ceph-mon[74418]: pgmap v1497: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.27866 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:44.559392+0000)
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1125393803' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.28057 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.17844 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:48 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/345044824' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:45.559600+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:46.559794+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171364 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:47.559956+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:48.560096+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:49.560309+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:50.560499+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:51.560669+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171364 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:52.560935+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:53.561143+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:54.561328+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:55.561581+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:56.561767+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171364 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:57.561982+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 22126592 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: mgrc ms_handle_reset ms_handle_reset con 0x563f271fe400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/17115915
Dec 05 10:38:48 compute-0 ceph-osd[82677]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/17115915,v1:192.168.122.100:6801/17115915]
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: get_auth_request con 0x563f271ff800 auth_method 0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: mgrc handle_mgr_configure stats_period=5
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:58.562225+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:59.562515+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:00.562668+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:01.562873+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171364 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:02.563081+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:03.563188+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:04.563380+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:05.563567+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 22044672 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:06.563666+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9cea000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f2615a3c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b2c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b2c00 session 0x563f25db3860
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f23dd8f00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112730112 unmapped: 22036480 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f27208f00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.416463852s of 32.573806763s, submitted: 31
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173122 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:07.563808+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f25de32c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe000 session 0x563f2701f2c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:08.563970+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x149e62a/0x1557000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:09.564157+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:10.564340+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:11.564521+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208357 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:12.564980+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271ff000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271ff000 session 0x563f2701e780
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:13.565100+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x149e62a/0x1557000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f2701e000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:14.566702+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f2701ed20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 21839872 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:15.566799+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 21905408 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:16.566941+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 21905408 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:17.567102+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212353 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 21905408 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:18.567313+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.259483337s of 11.413383484s, submitted: 40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f26774d20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 21905408 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:19.567491+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 21905408 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:20.567668+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:21.567853+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:22.568076+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237597 data_alloc: 234881024 data_used: 11083776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:23.568287+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:24.568439+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:25.568726+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:26.568879+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:27.569064+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237597 data_alloc: 234881024 data_used: 11083776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f23e04960
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe000 session 0x563f267754a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 21626880 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:28.569369+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.971426010s of 10.051115036s, submitted: 4
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9d03000/0x0/0x4ffc00000, data 0x149e65d/0x1559000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f26fd7680
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:29.569570+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:30.569763+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:31.570404+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:32.571010+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b3000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179220 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:33.571495+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:34.571909+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:35.572323+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:36.572508+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b3000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:37.572722+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179220 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:38.572886+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 23683072 heap: 134766592 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.104510307s of 10.235854149s, submitted: 39
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f25d63860
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f25db2b40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f272b2d20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f271fe000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f271fe000 session 0x563f23eae780
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:39.573086+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f26d57680
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 26804224 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:40.573336+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 26804224 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:41.574374+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 26804224 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:42.575144+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9b23000/0x0/0x4ffc00000, data 0x16815c8/0x1739000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 26804224 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221851 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:43.575369+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 26804224 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:44.575604+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 26771456 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9b23000/0x0/0x4ffc00000, data 0x16815c8/0x1739000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:45.575840+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113115136 unmapped: 24805376 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:46.576053+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113115136 unmapped: 24805376 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:47.576262+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113115136 unmapped: 24805376 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261351 data_alloc: 234881024 data_used: 13336576
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f2707c000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f272b2960
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:48.576396+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:49.576573+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:50.576775+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:51.576888+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:52.577181+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182525 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:53.577451+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:54.577682+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:55.577930+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:56.578126+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:57.578321+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182525 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:58.578500+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:59.578658+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:00.578810+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:01.578955+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:02.579292+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182525 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:03.579916+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:04.580516+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:05.581677+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:06.581910+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:07.582100+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182525 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:08.582490+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:09.582780+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:10.582985+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:11.583396+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:12.583655+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182525 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 26566656 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:13.583980+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 26558464 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:14.584216+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 26558464 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:15.610026+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 26558464 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:16.610436+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 26558464 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:17.610668+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182525 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 26558464 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f25d65a40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e44000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e44000 session 0x563f270850e0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f25a5f4a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:18.610875+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f25a5fa40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.391620636s of 39.564292908s, submitted: 35
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 26288128 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f23e070e0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f25d9be00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e44400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e44400 session 0x563f24064b40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f271d9680
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f25de2d20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:19.611320+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 26279936 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:20.611547+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 26279936 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:21.611712+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 26279936 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:22.611951+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194344 data_alloc: 218103808 data_used: 7503872
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 26279936 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:23.612321+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 26279936 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:24.612660+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 26271744 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:25.612851+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 26271744 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:26.613044+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 26271744 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:27.613222+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194344 data_alloc: 218103808 data_used: 7503872
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 26271744 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:28.613481+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 26271744 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.724584579s of 10.777653694s, submitted: 13
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:29.613655+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111493120 unmapped: 26427392 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:30.613889+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111493120 unmapped: 26427392 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:31.614121+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111493120 unmapped: 26427392 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:32.614335+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198428 data_alloc: 218103808 data_used: 8060928
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 26419200 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:33.614602+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 26419200 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:34.614915+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 26419200 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:35.615070+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 26419200 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:36.615606+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111509504 unmapped: 26411008 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:37.615771+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198428 data_alloc: 218103808 data_used: 8060928
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111509504 unmapped: 26411008 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:38.615922+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111509504 unmapped: 26411008 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:39.616043+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111509504 unmapped: 26411008 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:40.616197+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x118d5d8/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111509504 unmapped: 26411008 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:41.616338+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 111509504 unmapped: 26411008 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.636252403s of 12.641905785s, submitted: 1
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:42.616550+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267216 data_alloc: 218103808 data_used: 8052736
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113926144 unmapped: 23994368 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:43.616726+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 23904256 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:44.616899+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 23904256 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:45.617027+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 23904256 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96e7000/0x0/0x4ffc00000, data 0x1ab45d8/0x1b6d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:46.617220+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 25141248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:47.617398+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270282 data_alloc: 218103808 data_used: 8052736
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 25141248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96e7000/0x0/0x4ffc00000, data 0x1abc5d8/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:48.617543+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 25141248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:49.617744+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 25141248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:50.617903+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 25141248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:51.618080+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 25141248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96e7000/0x0/0x4ffc00000, data 0x1abc5d8/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:52.618478+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.704484940s of 10.629067421s, submitted: 52
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270298 data_alloc: 218103808 data_used: 8052736
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 25116672 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:53.618674+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 25116672 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:54.618911+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 25116672 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96e7000/0x0/0x4ffc00000, data 0x1abc5d8/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:55.619068+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:56.619266+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:57.619541+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270298 data_alloc: 218103808 data_used: 8052736
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:58.619694+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:59.619879+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96e7000/0x0/0x4ffc00000, data 0x1abc5d8/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:00.620059+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:01.620302+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:02.620559+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270298 data_alloc: 218103808 data_used: 8052736
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 25108480 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f96e7000/0x0/0x4ffc00000, data 0x1abc5d8/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:03.620710+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 25100288 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:04.620851+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.854027748s of 11.854028702s, submitted: 0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f26d57860
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 25272320 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:05.621042+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 25272320 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:06.621288+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 25272320 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:07.621502+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 25272320 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:08.622386+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 25272320 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:09.622549+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f26b525a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:10.622864+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:11.623276+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:12.623744+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:13.623963+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:14.624193+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:15.624321+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:16.624618+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:17.624817+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:18.625119+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:19.625291+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:20.625630+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:21.625837+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:22.626220+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:23.626528+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:24.626750+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:25.626928+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:26.627102+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:27.627319+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:28.627570+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:29.627750+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:30.627950+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:31.628090+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:32.628479+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:33.628662+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:34.628886+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:35.629327+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:36.629502+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:37.629650+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:38.629895+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:39.630077+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:40.630334+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25288704 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:41.630545+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 25280512 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:42.630764+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192120 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 25280512 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:43.630895+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 25280512 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:44.631103+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 25280512 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:45.631304+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e44800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.238075256s of 40.941547394s, submitted: 27
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e44800 session 0x563f24065c20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:46.631506+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:47.632215+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235618 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:48.632458+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:49.632669+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f240652c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:50.632865+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9b07000/0x0/0x4ffc00000, data 0x169d5c8/0x1755000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f23ddba40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:51.633054+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9b07000/0x0/0x4ffc00000, data 0x169d5c8/0x1755000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:52.633223+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235618 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 24977408 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9b07000/0x0/0x4ffc00000, data 0x169d5c8/0x1755000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:53.633448+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f26b325a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f26fd6b40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 24821760 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:54.633606+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e44c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 24821760 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e45000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:55.633797+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:56.633997+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:57.634147+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278310 data_alloc: 234881024 data_used: 13369344
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:58.634311+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ae3000/0x0/0x4ffc00000, data 0x16c15c8/0x1779000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:59.634473+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ae3000/0x0/0x4ffc00000, data 0x16c15c8/0x1779000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:00.634642+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:01.634849+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:02.635094+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278310 data_alloc: 234881024 data_used: 13369344
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ae3000/0x0/0x4ffc00000, data 0x16c15c8/0x1779000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:03.635276+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:04.635435+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:05.635548+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 23093248 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:06.635712+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.261785507s of 21.314563751s, submitted: 7
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ae3000/0x0/0x4ffc00000, data 0x16c15c8/0x1779000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 19816448 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:07.635860+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f91c1000/0x0/0x4ffc00000, data 0x1fe35c8/0x209b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348486 data_alloc: 234881024 data_used: 13598720
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 20684800 heap: 137920512 old mem: 2845415832 new mem: 2845415832
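Three recurring message types in this burst describe the OSD's memory autotuning loop: prioritycache tune_memory compares the process heap (mapped/unmapped) against the 4294967296-byte (4 GiB) target, bluestore.MempoolThread _resize_shards shows how the tuned cache budget (2845415832 bytes, about 2.65 GiB) is split across kv/onode/meta/data shards, and the paired rocksdb "High Pri Pool Ratio" lines (0.285714 = 2/7 and 0.0555556 = 1/18) appear to be per-cache settings recomputed on each resize, though that reading is an inference, not stated in the log. A hedged parser for the first two message types:

import re

TUNE_RE = re.compile(
    r"tune_memory target: (\d+) mapped: (\d+) unmapped: (\d+) heap: (\d+) "
    r"old mem: (\d+) new mem: (\d+)"
)
SHARD_RE = re.compile(r"(cache_size|\w+_alloc|\w+_used): (\d+)")

def parse_autotune(line):
    m = TUNE_RE.search(line)
    if m:
        target, mapped, unmapped, heap, old, new = map(int, m.groups())
        return {"kind": "tune_memory",
                "mapped_pct_of_target": round(100.0 * mapped / target, 2),
                "cache_budget_mib": round(new / 2**20, 1)}
    if "_resize_shards" in line:
        return {"kind": "resize_shards",
                **{k: int(v) for k, v in SHARD_RE.findall(line)}}
    return None

print(parse_autotune(
    "prioritycache tune_memory target: 4294967296 mapped: 117235712 "
    "unmapped: 20684800 heap: 137920512 old mem: 2845415832 new mem: 2845415832"))

With mapped memory under 3% of the target, old mem equals new mem on every line here: the tuner keeps the budget pinned at ~2713.6 MiB because nothing is pressuring it.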
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:08.636057+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 20480000 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:09.636372+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117522432 unmapped: 20398080 heap: 137920512 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:10.636518+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e45c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 22855680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e45c00 session 0x563f272090e0
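The reset line above closes a pair opened two lines earlier: handle_auth_request and ms_handle_reset share the connection pointer 0x563f28e45c00, and the same pattern recurs throughout the burst (0x563f242d4800, 0x563f25d1fc00, 0x563f262b3400, ...). Pairing them gives a rough measure of connection churn; a sketch, with the caveat that pointer values get reused, so this is heuristic correlation rather than guaranteed identity:

import re

CHALLENGE_RE = re.compile(r"handle_auth_request added challenge on (0x[0-9a-f]+)")
RESET_RE = re.compile(r"ms_handle_reset con (0x[0-9a-f]+)")

def churn_pairs(lines):
    # Pair each reset with the most recent unmatched challenge on the
    # same con pointer (heuristic: addresses can be recycled).
    pending, pairs = {}, []
    for i, ln in enumerate(lines):
        m = CHALLENGE_RE.search(ln)
        if m:
            pending[m.group(1)] = i
            continue
        m = RESET_RE.search(ln)
        if m and m.group(1) in pending:
            pairs.append((pending.pop(m.group(1)), i))
    return pairs

lines = [
    "monclient: handle_auth_request added challenge on 0x563f28e45c00",
    "monclient: tick",
    "osd.1 139 ms_handle_reset con 0x563f28e45c00 session 0x563f272090e0",
]
print(churn_pairs(lines))  # -> [(0, 2)]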
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:11.637377+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8ad0000/0x0/0x4ffc00000, data 0x26d45c8/0x278c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 22855680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:12.637625+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401384 data_alloc: 234881024 data_used: 13598720
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 22855680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:13.637781+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 22847488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:14.638037+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:15.638294+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 22847488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8ad0000/0x0/0x4ffc00000, data 0x26d45c8/0x278c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:16.638548+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 22839296 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:17.638771+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 22839296 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402104 data_alloc: 234881024 data_used: 13598720
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8acd000/0x0/0x4ffc00000, data 0x26d75c8/0x278f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:18.639008+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 22839296 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:19.639305+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 22839296 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.501544952s of 12.911491394s, submitted: 75
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f27084780
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:20.639557+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118775808 unmapped: 22822912 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:21.639876+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 121044992 unmapped: 20553728 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:22.640224+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8acd000/0x0/0x4ffc00000, data 0x26d75c8/0x278f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451237 data_alloc: 234881024 data_used: 20717568
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:23.640545+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8acd000/0x0/0x4ffc00000, data 0x26d75c8/0x278f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:24.640764+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:25.640965+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8acd000/0x0/0x4ffc00000, data 0x26d75c8/0x278f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:26.641171+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:27.641404+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451237 data_alloc: 234881024 data_used: 20717568
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:28.641582+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:29.641841+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:30.642064+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:31.642313+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17309696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8acd000/0x0/0x4ffc00000, data 0x26d75c8/0x278f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.388243675s of 12.616801262s, submitted: 6
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:32.642470+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124977152 unmapped: 16621568 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472753 data_alloc: 234881024 data_used: 20783104
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:33.642700+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 16482304 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:34.642958+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 16482304 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:35.643140+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 16482304 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:36.643378+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 16482304 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f6000/0x0/0x4ffc00000, data 0x29ae5c8/0x2a66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:37.643649+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 16482304 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476853 data_alloc: 234881024 data_used: 20783104
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:38.643830+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 16482304 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:39.644104+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 16449536 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:40.644328+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 16449536 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:41.644557+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 17219584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:42.644902+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 17219584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.793564320s of 10.690699577s, submitted: 32
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476061 data_alloc: 234881024 data_used: 20783104
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:43.645057+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 17219584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:44.645390+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 17219584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:45.645555+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 17219584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:46.646198+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 17219584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:47.647116+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 17211392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476061 data_alloc: 234881024 data_used: 20783104
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:48.647953+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 17211392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:49.648671+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 17211392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:50.648856+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 17211392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:51.649327+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 17211392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:52.649933+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 17211392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476061 data_alloc: 234881024 data_used: 20783104
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:53.650393+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 17203200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:54.650848+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 17203200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:55.651080+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:56.651381+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:57.651725+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:58.652027+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476061 data_alloc: 234881024 data_used: 20783104
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:59.652391+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:00.652665+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:01.652937+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:02.653263+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:03.653509+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476061 data_alloc: 234881024 data_used: 20783104
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f2615af00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.738149643s of 20.738151550s, submitted: 0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17195008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f26b32780
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:04.654038+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 17186816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:05.654383+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 17186816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f87f3000/0x0/0x4ffc00000, data 0x29b15c8/0x2a69000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:06.654518+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f271d9e00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 21594112 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:07.655281+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 21594112 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:08.655661+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360097 data_alloc: 234881024 data_used: 13598720
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9197000/0x0/0x4ffc00000, data 0x200d5c8/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 21594112 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:09.655899+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e45000 session 0x563f26b53a40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e44c00 session 0x563f26fd72c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 21594112 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f26b32f00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:10.656385+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:11.656635+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:12.656844+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:13.657031+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207153 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:14.657172+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:15.657323+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:16.658210+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:17.658439+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:18.658639+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207153 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:19.659067+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:20.659313+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:21.659498+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:22.660396+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27887 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
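The lone ceph-mgr record in this stretch is an audit-channel entry showing client.admin dispatching "orch upgrade status" against mon-mgr; the cmd= payload is plain JSON, so it lifts straight out of the line. A minimal sketch:

import json
import re

CMD_RE = re.compile(r"cmd=(\[.*?\]): dispatch")

def audit_commands(line):
    # Extract the JSON command array from a mgr audit 'dispatch' line.
    m = CMD_RE.search(line)
    return json.loads(m.group(1)) if m else []

line = ("log_channel(audit) log [DBG] : from='client.27887 -' "
        "entity='client.admin' cmd=[{\"prefix\": \"orch upgrade status\", "
        "\"target\": [\"mon-mgr\", \"\"]}]: dispatch")
for cmd in audit_commands(line):
    print(cmd["prefix"])  # -> orch upgrade status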
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:23.660552+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207153 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:24.660931+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:25.661223+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:26.661491+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:27.661775+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:28.662382+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207153 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:29.663022+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:30.663846+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:31.664158+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:32.664675+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:33.665203+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207153 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:34.665664+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:35.665876+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:36.666163+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:37.666432+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:38.666693+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207153 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 25280512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f26fd7c20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f26d572c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f26a8f400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f26a8f400 session 0x563f23ddba40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f26b323c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.104686737s of 35.593875885s, submitted: 33
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:39.666977+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f2707c5a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f25db43c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f28e44c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f28e44c00 session 0x563f2701e1e0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f284ee400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f284ee400 session 0x563f26b53e00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f2701f4a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:40.667203+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:41.667453+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f284ef800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f284ef800 session 0x563f24d21860
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:42.667714+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:43.668026+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f284efc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f284efc00 session 0x563f27085860
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227580 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:44.668339+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f270852c0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f262b3400 session 0x563f23dda5a0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:45.668562+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:46.668876+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f242d4800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:47.669105+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 25403392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:48.669440+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236984 data_alloc: 218103808 data_used: 8843264
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 25747456 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:49.669691+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:50.669907+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:51.670087+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:52.670312+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:53.670523+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236984 data_alloc: 218103808 data_used: 8843264
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:54.670768+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:55.671060+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:56.671329+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:57.671506+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:58.671660+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x12655c8/0x131d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236984 data_alloc: 218103808 data_used: 8843264
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25739264 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.522546768s of 19.943984985s, submitted: 20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:59.671855+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118030336 unmapped: 23568384 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:00.672035+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 22528000 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:01.672554+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 21397504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:02.672830+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 120209408 unmapped: 21389312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x182c5c8/0x18e4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,4,2])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:03.672985+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292212 data_alloc: 234881024 data_used: 9801728
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:04.673185+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:05.673340+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:06.673619+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.9 total, 600.0 interval
                                           Cumulative writes: 12K writes, 44K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3681 syncs, 3.41 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2310 writes, 7161 keys, 2310 commit groups, 1.0 writes per commit group, ingest: 7.02 MB, 0.01 MB/s
                                           Interval WAL: 2310 writes, 1005 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:07.673820+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:08.674069+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292380 data_alloc: 234881024 data_used: 9805824
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:09.674332+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.421975136s of 10.311377525s, submitted: 62
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:10.674570+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:11.674764+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:12.675004+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:13.675202+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292396 data_alloc: 234881024 data_used: 9805824
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:14.675450+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:15.675650+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x18345c8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:16.675818+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:17.675999+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 22806528 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:18.676328+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292396 data_alloc: 234881024 data_used: 9805824
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f242d4800 session 0x563f25db2d20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 22798336 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d1fc00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:19.676489+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d1fc00 session 0x563f2707c960
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:20.676718+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:21.676985+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:22.677298+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:23.677524+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:24.677668+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:25.677861+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:26.678053+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:27.678188+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:28.678346+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:29.678512+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:30.678682+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:31.678865+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:32.679083+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:33.679241+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:34.679449+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:35.679672+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:36.679857+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:37.680022+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:38.680222+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:39.680453+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:40.680665+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:41.680891+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:42.681119+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:43.681324+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:44.681497+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:45.681754+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:46.681928+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:47.682117+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:48.682313+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:49.682499+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:50.682651+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:51.682822+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:52.683055+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:53.683250+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:54.683429+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:55.683607+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:56.683881+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:57.684151+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:58.684375+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:59.684630+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:00.684868+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:01.685028+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:02.685261+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:03.685418+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:04.685588+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:05.685807+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:06.686148+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:07.686455+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:08.686693+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:09.686902+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:10.687133+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:11.687316+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:12.687633+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24485888 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:13.687816+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:14.688025+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:15.688222+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:16.688481+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:17.688644+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:18.688858+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:19.689013+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:20.689189+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:21.689387+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:22.689648+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:23.689818+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:24.689995+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24477696 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:25.690169+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:26.690384+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:27.690522+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:28.690694+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:29.690864+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:30.691001+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:31.691136+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:32.691384+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:33.691659+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:34.691841+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:35.691968+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:36.692143+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:37.692457+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:38.692620+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:39.692773+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:40.692943+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:41.693111+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:42.693298+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:43.693440+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:44.693577+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:45.693780+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:46.693986+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:47.694159+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:48.694315+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:49.694571+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:50.694760+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:51.694991+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:52.695196+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:53.695348+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:54.695510+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 105.019371033s of 105.452400208s, submitted: 28
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:55.695642+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24412160 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:56.695779+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24412160 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:57.695921+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 24371200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0b5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:58.696082+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:59.696265+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 24281088 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:00.696401+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:01.696549+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 24281088 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'config diff' '{prefix=config diff}'
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'config show' '{prefix=config show}'
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:02.696760+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24526848 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:03.696926+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24805376 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:04.697104+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24494080 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'log dump' '{prefix=log dump}'
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:05.697294+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24494080 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'perf dump' '{prefix=perf dump}'
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'perf schema' '{prefix=perf schema}'
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:06.697461+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:07.697634+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24469504 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:08.697867+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:09.698018+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:10.698213+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:11.767969+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:12.768168+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:13.768376+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:14.783314+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:15.784453+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24461312 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:16.784630+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:17.784840+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:18.785092+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:19.785267+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:20.785411+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:21.785624+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:22.785888+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24453120 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:23.786034+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24444928 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:24.786174+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24444928 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:25.786382+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24444928 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:26.786522+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24444928 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:27.786689+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24444928 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:28.786831+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24444928 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:29.786966+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24444928 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:30.787184+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24444928 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:31.787373+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24436736 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:32.787692+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24436736 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:33.787869+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24436736 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:34.788003+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24436736 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:35.788356+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24436736 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:36.788508+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24436736 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:37.788711+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24436736 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:38.788888+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24436736 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:39.789088+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24436736 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:40.789264+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24436736 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:41.789441+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24428544 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:42.789650+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24428544 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:43.789789+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24428544 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:44.789970+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24428544 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:45.790140+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24428544 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:46.790349+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24428544 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:47.790561+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.190612793s of 52.963741302s, submitted: 218
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24428544 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:48.790806+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24428544 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:49.790980+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216985 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24420352 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:50.791148+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24420352 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:51.791339+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24420352 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:52.791633+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24420352 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:53.791784+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24420352 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:54.791928+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216970 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24420352 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,2])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:55.792071+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24420352 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.314220428s, txc = 0x563f23f84f00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 5.443528652s
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.443528652s
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.558694839s, txc = 0x563f24856000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:56.792221+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24412160 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:57.792428+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24412160 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:58.792600+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 0.078216933s of 10.916832924s, submitted: 18
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24412160 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:59.792744+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24403968 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:00.793143+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24403968 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:01.793325+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24403968 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:02.793519+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:03.793836+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:04.793996+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216970 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:05.794152+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:06.794769+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:07.796276+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:08.796560+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:09.796701+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:10.797017+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:11.797408+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.203547955s of 13.530203819s, submitted: 52
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:12.797792+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:13.797940+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:14.798139+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216970 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:15.798428+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:16.798595+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:17.798866+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:18.799109+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:19.799349+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:20.799589+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:21.799769+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:22.800004+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:23.800201+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:24.800346+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:25.800503+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:26.800670+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:27.800855+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:28.801028+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:29.801178+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:30.801362+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:31.801588+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:32.801838+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:33.801989+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:34.802132+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:35.802340+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:36.802457+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:37.802679+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:38.802901+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:39.803094+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:40.803289+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:41.803441+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:42.803776+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:43.803937+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:44.804117+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:45.804468+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:46.804714+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:47.805025+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:48.805275+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:49.823464+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:50.823821+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:51.824332+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:52.824674+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:53.825007+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:54.825358+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:55.825599+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:56.825834+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:57.826203+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:58.826801+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:59.827021+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:00.827304+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:01.827511+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:02.827805+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:03.828019+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:04.828211+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:05.828441+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:06.828637+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:07.828865+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:08.829012+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:09.829206+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:10.829411+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:11.829689+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:12.829895+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:13.830124+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:14.830447+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:15.830605+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:16.830727+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:17.830854+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:18.831057+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:19.831328+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:20.833144+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:21.833377+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:22.833600+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:23.833798+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:24.833988+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:25.834154+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:26.834301+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:27.834507+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:28.834743+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 24231936 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:29.834990+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117374976 unmapped: 24223744 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:30.835182+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117374976 unmapped: 24223744 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:31.835419+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117374976 unmapped: 24223744 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:32.835646+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117374976 unmapped: 24223744 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:33.835823+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117374976 unmapped: 24223744 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:34.836019+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117374976 unmapped: 24223744 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:35.836204+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117374976 unmapped: 24223744 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:36.836405+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117374976 unmapped: 24223744 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:37.836578+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 24215552 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:38.836760+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 24215552 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:39.836907+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 24215552 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:40.837152+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 24215552 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:41.837327+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 24215552 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:42.837617+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 24215552 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:43.837773+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 24215552 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:44.837944+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 24215552 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:45.838184+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 24215552 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:46.838354+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:47.838541+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:48.838754+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:49.838917+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:50.839195+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:51.839403+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:52.839675+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:53.839860+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:54.840100+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:55.840324+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:56.840550+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:57.840767+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:58.840896+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:29:59.841056+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:00.841226+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:01.841537+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 24207360 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:02.841829+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 24199168 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:03.842092+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 24199168 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:04.842373+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 24199168 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:05.842594+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 24199168 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:06.842790+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 24199168 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets getting new tickets!
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:07.843293+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _finish_auth 0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:08.056721+0000)
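This is the one break in the pattern across the whole stretch: instead of the usual no-op check, _check_auth_tickets decides its cephx tickets are due and fetches new ones — the request goes to mon.compute-0 at v2:192.168.122.100:3300/0, _finish_auth 0 comes back (reading the 0 as a zero error code, which the immediately refreshed expiry stamp of 10:30:08.056721 supports), and the routine tick cycle resumes. A small filter sketch, under the same hypothetical-file assumption as above, for finding such renewals and their return codes in a longer slice:

    import re

    def renewals(lines):
        # pair each renewal request with the next _finish_auth result
        pending = None
        for n, line in enumerate(lines, 1):
            if "getting new tickets!" in line:
                pending = n
            elif pending is not None:
                m = re.search(r"_finish_auth (-?\d+)", line)
                if m:
                    yield pending, n, int(m.group(1))
                    pending = None

    with open("ceph-osd.journal.txt") as fh:   # same hypothetical slice
        for req, done, rc in renewals(fh):
            print(f"renewal requested (line {req}) -> rc={rc} (line {done})")

A nonzero rc, or a request with no matching _finish_auth, would be the thing to chase in a slice like this; here the single renewal completes cleanly.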
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 24199168 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:08.843514+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 24199168 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:09.843786+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 24199168 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:10.844047+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 24199168 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:11.844341+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 24199168 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:12.844630+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 24199168 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:13.845051+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117407744 unmapped: 24190976 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:14.845300+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117407744 unmapped: 24190976 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:15.845460+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117407744 unmapped: 24190976 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:16.845764+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117407744 unmapped: 24190976 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:17.845998+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117407744 unmapped: 24190976 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:18.846199+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117407744 unmapped: 24190976 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:19.846408+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117407744 unmapped: 24190976 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:20.846637+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:21.846844+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:22.847081+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:23.847329+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:24.847553+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:25.847745+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:26.847949+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:27.848149+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:28.848325+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:29.848531+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:30.848709+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:31.848883+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:32.849141+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:33.849342+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 24182784 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:34.849517+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:35.849749+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:36.849904+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:37.850087+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:38.850302+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:39.850608+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:40.850823+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:41.851019+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:42.851208+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:43.851426+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:44.851627+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:45.851805+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:46.851973+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:47.852138+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:48.852341+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 24174592 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:49.852543+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 24166400 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:50.852694+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 24166400 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:51.852911+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 24166400 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:52.853108+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 24166400 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:53.853284+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 24166400 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:54.853451+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 24166400 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:55.853628+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 24166400 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:56.853849+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 24166400 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:57.853988+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 24166400 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:58.854152+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 24166400 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:30:59.854350+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:00.854513+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:01.854713+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:02.854942+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:03.855112+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:04.855315+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:05.855472+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:06.855642+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:07.855864+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:08.856040+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:09.856360+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:10.856576+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:11.856806+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:12.857042+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:13.857318+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:14.857560+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:15.857714+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:16.857947+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:17.858106+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:18.858363+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:19.858533+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:20.858696+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:21.858796+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:22.859010+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:23.859177+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:24.859349+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:25.859506+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:26.859667+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:27.859810+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:28.859970+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:29.860119+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:30.860312+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:31.860488+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:32.860679+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:33.860827+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:34.860983+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:35.861178+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:36.861356+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:37.861505+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:38.861671+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:39.861879+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:40.862041+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:41.862192+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:42.862399+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:43.862755+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d89c00 session 0x563f24c50000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f262b3400
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f23e88000 session 0x563f24c50f00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25d89c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:44.862907+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:45.863051+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:46.863195+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24395776 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:47.863370+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24387584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:48.863696+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24387584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:49.863889+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24387584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:50.864051+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24387584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:51.864270+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24387584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:52.864528+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24387584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:53.864694+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24387584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:54.864852+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24387584 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:55.865042+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:56.865191+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:57.865425+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:58.865622+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:31:59.865834+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:00.866054+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:01.866277+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:02.866534+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:03.866778+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:04.866980+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:05.867205+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:06.867424+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:07.867571+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:08.867735+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24379392 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:09.867922+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 24371200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:10.868135+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 24371200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:11.868325+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 24371200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:12.868574+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 24371200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:13.868775+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 24371200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:14.868972+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 24371200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:15.869155+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 24371200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:16.869407+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 24371200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:17.869597+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 24371200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:18.869776+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 24371200 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:19.869967+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:20.870293+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:21.870518+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:22.870820+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:23.871065+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:24.871304+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:25.871453+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:26.871633+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:27.871881+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:28.872055+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:29.872277+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:30.872449+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:31.872599+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 24363008 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:32.872811+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 24354816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:33.873002+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 24354816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:34.873184+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 24354816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:35.873425+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 24354816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:36.873640+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 24354816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:37.873829+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 24354816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:38.874028+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 24354816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:39.874205+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 24354816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:40.874435+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 24354816 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:41.874582+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:42.874855+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:43.875040+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:44.875277+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:45.875439+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25ea2c00 session 0x563f272b3c20
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f23e88000
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:46.875575+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:47.875702+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 ms_handle_reset con 0x563f25d58800 session 0x563f26d47a40
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: handle_auth_request added challenge on 0x563f25ea2c00
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:48.875925+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:49.876158+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:50.876364+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:51.876497+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:52.876753+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:53.876981+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:54.877171+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:55.877341+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:56.877521+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:57.877702+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 24346624 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:58.877866+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 24338432 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:32:59.878040+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 24338432 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:00.878413+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 24338432 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:01.878663+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 24338432 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:02.878966+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 24338432 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:03.879202+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 24338432 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:04.879392+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 24338432 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:05.879724+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 24338432 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:06.879953+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 24338432 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:07.880424+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 24338432 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:08.880632+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 24338432 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:09.880893+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:10.881140+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:11.881397+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:12.881634+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:13.881840+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:14.882091+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:15.882353+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:16.882572+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:17.882828+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:18.883013+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:19.883291+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:20.884673+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:21.884868+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 24330240 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:22.885084+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 24322048 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:23.885294+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 24322048 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:24.885509+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 24322048 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:25.885671+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 24322048 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:26.885869+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 24322048 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:27.886037+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 24322048 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:28.886202+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 24313856 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:29.886375+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 24313856 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:30.886685+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 24313856 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:31.886854+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 24313856 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:32.887134+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 24313856 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:33.887382+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 24313856 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:34.887596+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 24305664 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:35.887761+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 24305664 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:36.887961+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 24305664 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:37.888117+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 24305664 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:38.888334+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 24305664 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:39.888549+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 24305664 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:40.888758+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:41.888977+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:42.889282+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:43.889533+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:44.889698+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:45.889869+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:46.890038+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:47.890277+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:48.890425+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:49.890567+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:50.890805+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:51.890996+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:52.891276+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 24297472 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:53.891537+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:54.891776+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:55.892001+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:56.892194+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:57.892411+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:58.892585+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:33:59.892755+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:00.892932+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:01.893179+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:02.893487+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:03.893713+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 24289280 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:04.893873+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 24281088 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:05.894027+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 24281088 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:06.894177+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 24281088 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:07.894334+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 24281088 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:08.894508+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 24281088 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:09.894660+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:10.894798+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:11.894962+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:12.895202+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:13.895409+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:14.895616+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:15.895834+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:16.896017+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:17.896389+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:18.896542+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:19.896789+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:20.896954+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:21.897169+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:22.897407+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:23.897623+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:24.897792+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:25.898016+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:26.898162+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 24272896 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:27.898314+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:28.898483+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:29.898830+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:30.899136+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 24264704 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:31.899312+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:32.899500+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:33.899695+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:34.899909+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:35.900148+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:36.900382+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:37.900576+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 24256512 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:38.900761+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:39.901021+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:40.901334+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:41.901556+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:42.901858+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 24248320 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:43.902047+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:44.902188+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:45.902364+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:46.902541+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:47.902743+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:48.902905+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:49.903068+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:50.903345+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:51.903504+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:52.903727+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:53.903895+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:54.904187+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:55.904391+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:56.904576+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 24240128 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:57.904807+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: mgrc ms_handle_reset ms_handle_reset con 0x563f271ff800
Dec 05 10:38:48 compute-0 ceph-osd[82677]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/17115915
Dec 05 10:38:48 compute-0 ceph-osd[82677]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/17115915,v1:192.168.122.100:6801/17115915]
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: get_auth_request con 0x563f284ee400 auth_method 0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: mgrc handle_mgr_configure stats_period=5
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:58.904951+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 24158208 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:34:59.905097+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:00.905293+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:01.905455+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:02.905691+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:03.906312+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:04.906523+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:05.906985+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:06.907178+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.9 total, 600.0 interval
                                           Cumulative writes: 13K writes, 45K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 4040 syncs, 3.28 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 723 writes, 1146 keys, 723 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s
                                           Interval WAL: 723 writes, 359 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:07.907356+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:08.908408+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:09.908585+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:10.908786+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:11.908949+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:12.909141+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:13.909301+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:15.500489+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 24150016 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:16.500626+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:17.500787+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:18.500942+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:19.501088+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:20.501253+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:21.501413+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:22.501626+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:23.501933+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:24.502077+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:25.502254+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:26.502426+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:27.502602+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:28.502803+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:29.502952+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:30.503171+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:31.503336+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:32.503476+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:33.503667+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:34.503796+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:35.503938+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:36.504084+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:37.504318+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:38.504444+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:39.504615+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:40.504766+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:41.504943+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:42.505134+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:43.505428+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 24141824 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:44.505575+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:45.505732+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:46.505938+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:47.506089+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:48.506252+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:49.506452+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:50.506679+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:51.506831+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:52.507035+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:53.507176+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:54.507370+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:55.507544+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:56.507695+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:57.507818+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:58.507981+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:35:59.508164+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:00.508315+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:01.508488+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 24133632 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:02.508627+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117473280 unmapped: 24125440 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:03.508836+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:04.509091+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:05.509289+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:06.509596+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:07.509761+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:08.509956+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:09.510346+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:10.510491+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:11.510689+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:12.510874+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:13.511341+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:14.511543+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:15.511721+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:16.511897+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:17.512177+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:18.512352+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:19.512519+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:20.512691+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117481472 unmapped: 24117248 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:21.512871+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117489664 unmapped: 24109056 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:22.513065+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117489664 unmapped: 24109056 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:23.513327+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117489664 unmapped: 24109056 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:24.513482+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117489664 unmapped: 24109056 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:25.513665+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117489664 unmapped: 24109056 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:26.513823+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117489664 unmapped: 24109056 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28072 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
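
The one non-OSD line in this stretch: the manager's audit channel records client.admin dispatching a balancer status query, and the JSON array in the log is the wire form of the CLI command. Something like the following would reproduce it from a script (assumes the ceph CLI and an admin keyring on this host; the wrapper function is illustrative):

    # Sketch: issue the same "balancer status" query the audit line records.
    # Assumes the ceph CLI and an admin keyring are available on this host.
    import json
    import subprocess

    def balancer_status():
        out = subprocess.run(
            ["ceph", "balancer", "status", "--format", "json-pretty"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    print(balancer_status())
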
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:27.514182+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:28.514401+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:29.514641+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:30.514895+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:31.515048+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:32.515360+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:33.515651+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:34.515828+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:35.516010+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:36.516164+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:37.516317+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:38.516462+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:39.516597+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:40.516748+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:41.516881+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:42.517087+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:43.517291+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 24100864 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:44.517454+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 24092672 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:45.517647+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 24092672 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:46.518086+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 24092672 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:47.518265+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 24092672 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:48.518392+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 24092672 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:49.518543+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 24092672 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:50.518694+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 24092672 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:51.518828+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 24092672 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:52.519021+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 24084480 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:53.519321+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 24084480 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:54.519512+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 24084480 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:55.519666+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 518.382507324s of 523.205322266s, submitted: 48
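
The periodic _kv_sync_thread utilization report is the clearest health signal in the burst: over a ~523 s window the BlueStore KV sync thread was idle for 518.4 s and submitted only 48 transactions, i.e. under 1% busy, so this OSD is essentially quiescent. The busy fraction is just (total - idle) / total:

    # Sketch: busy fraction of the kv_sync_thread over the logged window.
    def busy_fraction(idle_s, total_s):
        return (total_s - idle_s) / total_s

    print(f"{busy_fraction(518.382507324, 523.205322266):.2%} busy, "
          f"{48 / 523.205322266:.3f} submits/s")
    # -> 0.92% busy, 0.092 submits/s
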
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 24084480 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:56.519783+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 24059904 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:57.519934+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117555200 unmapped: 24043520 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:58.520110+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 24010752 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:36:59.520337+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 23945216 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:00.520605+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 23912448 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:01.520775+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 23912448 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:02.520948+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 23912448 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:03.521164+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 23904256 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:04.521332+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 23904256 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:05.521503+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 23904256 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:06.521716+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 23904256 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:07.521887+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 23904256 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:08.522047+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 23896064 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:09.522272+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 23896064 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:10.522594+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 23896064 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:11.522800+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 23887872 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:12.523038+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 23887872 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:13.523276+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 23887872 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:14.523422+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 23887872 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:15.582526+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 23887872 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:16.582936+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 23879680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:17.583151+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 23879680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:18.583321+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 23879680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:19.583518+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 23879680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:20.583993+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 23879680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:21.584199+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 23879680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:22.584521+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 23879680 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:23.584759+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:24.585187+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:25.585434+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:26.585786+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:27.586080+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:28.586350+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:29.586659+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:30.586872+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:31.587148+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:32.587436+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:33.587681+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:34.587886+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:35.588105+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:36.588362+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:37.588556+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:38.588769+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:39.589098+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:40.589410+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:41.589676+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:42.590085+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:43.590332+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:44.590616+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:45.590901+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:46.591083+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 23871488 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:47.591267+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.600116730s of 51.844303131s, submitted: 222
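
A second utilization window, reported just above: 1.24 s busy out of 51.8 s with 222 submits. Still only ~2.4% busy, but the commit rate is up by a factor of roughly 47 (about 4.3 versus 0.09 submits/s), and a heartbeat a few lines below briefly shows a non-empty op histogram (op hist [1]), so some light client or internal activity has started. Comparing the two windows:

    # Sketch: compare the two logged kv_sync_thread windows.
    windows = [("first", 518.382507324, 523.205322266, 48),
               ("second", 50.600116730, 51.844303131, 222)]
    for name, idle, total, submits in windows:
        print(f"{name}: {(total - idle) / total:5.2%} busy, "
              f"{submits / total:5.2f} submits/s")
    # first:  0.92% busy, 0.09 submits/s
    # second: 2.40% busy, 4.28 submits/s
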
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 23863296 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:48.592175+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 23855104 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [1])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:49.592398+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 23797760 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:50.592567+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117809152 unmapped: 23789568 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:51.592737+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117809152 unmapped: 23789568 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:52.592931+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117809152 unmapped: 23789568 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:53.593138+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117809152 unmapped: 23789568 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:54.593317+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117809152 unmapped: 23789568 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:55.593511+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 23781376 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:56.593688+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 23781376 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:57.593846+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 23781376 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:58.594013+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 23781376 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:37:59.594213+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 23781376 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:00.596661+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 23781376 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:01.596963+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:02.597316+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:03.597551+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:04.597708+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:05.597870+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:06.598045+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:07.598292+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:08.598567+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:09.598807+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:10.599030+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:11.599253+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:12.599503+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:13.599704+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:14.599873+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 23773184 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:15.600038+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: osd.1 139 heartbeat osd_stat(store_statfs(0x4facc5000/0x0/0x4ffc00000, data 0x10ef5c8/0x11a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'config diff' '{prefix=config diff}'
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'config show' '{prefix=config show}'
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117547008 unmapped: 24051712 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:16.600207+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 23740416 heap: 141598720 old mem: 2845415832 new mem: 2845415832
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: tick
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_tickets
Dec 05 10:38:48 compute-0 ceph-osd[82677]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:38:17.600322+0000)
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 10:38:48 compute-0 ceph-osd[82677]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 10:38:48 compute-0 ceph-osd[82677]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216898 data_alloc: 218103808 data_used: 7499776
Dec 05 10:38:48 compute-0 ceph-osd[82677]: do_command 'log dump' '{prefix=log dump}'
Dec 05 10:38:48 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17859 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:48 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 05 10:38:48 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1157927566' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:38:48 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:48.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:49 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27905 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:49 compute-0 nova_compute[257087]: 2025-12-05 10:38:49.098 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:49 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28090 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:49 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:49 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:49 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:49.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:49 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 10:38:49 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27914 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 10:38:49 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3153690514' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27926 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28108 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3498683530' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mon[74418]: from='client.27887 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mon[74418]: from='client.28072 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mon[74418]: from='client.17859 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1157927566' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mon[74418]: from='client.27905 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mon[74418]: from='client.28090 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1004540782' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mon[74418]: from='client.27914 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3153690514' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/4202208088' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17883 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:49 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27944 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:50 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:50 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:50 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:50.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:50 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1498: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:50 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17898 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:50 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.27962 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:50 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 05 10:38:50 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3700710665' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 05 10:38:50 compute-0 podman[298847]: 2025-12-05 10:38:50.447754473 +0000 UTC m=+0.089326369 container health_status a8a2353aa5aec03def4d327740cc3f9c5f9554a2f83e2cb51b075b470af89ba8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:38:50 compute-0 podman[298843]: 2025-12-05 10:38:50.470146022 +0000 UTC m=+0.116163768 container health_status 67e4932f02054bc510ce93131333a5c2554ba55e20ff4aa6432a6172355e9d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec 05 10:38:50 compute-0 podman[298846]: 2025-12-05 10:38:50.47046413 +0000 UTC m=+0.116020504 container health_status 7b8db28978fcb093d7b36e8acaec30c47f3a411efb038ada8128a28a568c5f18 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 10:38:50 compute-0 crontab[298940]: (root) LIST (root)
Dec 05 10:38:50 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17916 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.27926 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.28108 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1230425791' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.17883 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.27944 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1556708586' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3324650243' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3402878605' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3041430921' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: pgmap v1498: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.17898 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3511628999' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.27962 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3700710665' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1548656452' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2286301185' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 05 10:38:51 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:51 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:51 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:51.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:51 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17934 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 05 10:38:51 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734139191' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 05 10:38:51 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17958 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:52 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:52 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:52 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:52.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 05 10:38:52 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3686054706' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.17916 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2861233590' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/819153872' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2707766863' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/670699547' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.17934 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3033156140' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/776787040' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2170238622' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1815646077' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/734139191' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2215518144' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3930693913' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1885638522' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3679991154' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.17973 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1499: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:38:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 05 10:38:52 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3140778362' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 05 10:38:52 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2813822854' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28237 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:52 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 05 10:38:52 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1532538395' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 05 10:38:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:52 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:38:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:38:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:38:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:53 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:38:53 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28243 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:53 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:53 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:53 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:53.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:53 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28249 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:53 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28255 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 05 10:38:53 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4234080267' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 05 10:38:53 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28106 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:53 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28273 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:53 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:53.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:53 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 05 10:38:53 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3139304668' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 05 10:38:54 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:54 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:54 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:54.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:54 compute-0 nova_compute[257087]: 2025-12-05 10:38:54.100 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:54 compute-0 nova_compute[257087]: 2025-12-05 10:38:54.102 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:54 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1500: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:54 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28288 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:54 compute-0 systemd[1]: Starting Hostname Service...
Dec 05 10:38:54 compute-0 systemd[1]: Started Hostname Service.
Dec 05 10:38:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 05 10:38:54 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3186156451' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28294 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.17958 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/503725699' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3686054706' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.17973 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: pgmap v1499: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3723006792' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/86722229' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/875228356' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/597886457' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3140778362' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1317873324' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1102699713' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2813822854' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3146630674' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1532538395' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2352251183' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 05 10:38:54 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 05 10:38:54 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1233502926' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28306 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:55 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:38:55 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:55.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:38:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 05 10:38:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2531292344' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 05 10:38:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1177153056' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28127 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28318 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-mgr-compute-0-hvnxai[74707]: ::ffff:192.168.122.100 - - [05/Dec/2025:10:38:55] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:38:55 compute-0 ceph-mgr[74711]: [prometheus INFO cherrypy.access.140086240604416] ::ffff:192.168.122.100 - - [05/Dec/2025:10:38:55] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec 05 10:38:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:38:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 05 10:38:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529734144' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 05 10:38:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1767574282' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.28237 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.28243 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/676874165' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.28249 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.28255 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/4234080267' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.28106 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.28273 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2632092306' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3139304668' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: pgmap v1500: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.28288 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1755875143' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/3186156451' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.28294 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/796034764' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1233502926' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.28306 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1000695565' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2531292344' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1177153056' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.28127 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.28318 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1241107296' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2538874290' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/529734144' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1767574282' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:38:55 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:38:55 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28148 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:56 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:56 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:56 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:56.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 05 10:38:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2830212978' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 10:38:56 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1501: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:56 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.18102 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:56 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28163 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 05 10:38:56 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/642934218' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 05 10:38:56 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.18123 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:56 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28178 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='client.28148 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2830212978' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: pgmap v1501: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='client.18102 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='client.28163 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/4247590937' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/642934218' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:38:56 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.18135 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.18141 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.18144 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:57 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:57 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 05 10:38:57 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:57.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 05 10:38:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.18168 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28408 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.18165 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28244 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:57.601Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:57 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 05 10:38:57 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:38:57 compute-0 ceph-mon[74418]: from='client.18123 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mon[74418]: from='client.28178 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mon[74418]: from='client.18135 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3147192488' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mon[74418]: from='client.18141 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2290865913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.10:0/2290865913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2184983120' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mon[74418]: from='client.18144 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mon[74418]: from='client.18168 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1992322549' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mon[74418]: from='mgr.14703 192.168.122.100:0/217606210' entity='mgr.compute-0.hvnxai' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.18177 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:57 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28268 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:38:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:57 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:38:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:38:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:38:58 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:38:58 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:58 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:38:58 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:38:58.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:38:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:38:58 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1502: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:38:58 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28280 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28292 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 05 10:38:58 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1815648975' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.18225 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='client.28408 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='client.18165 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='client.28244 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3129587637' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='client.18177 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1392088325' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='client.28268 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:38:58 compute-0 ceph-mon[74418]: pgmap v1502: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='client.28280 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2213448725' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='client.28292 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1815648975' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3261961434' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:38:58 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:38:58 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-alertmanager-compute-0[106391]: ts=2025-12-05T10:38:58.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 05 10:38:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Dec 05 10:38:59 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2533464369' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 05 10:38:59 compute-0 nova_compute[257087]: 2025-12-05 10:38:59.158 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:38:59 compute-0 nova_compute[257087]: 2025-12-05 10:38:59.161 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:38:59 compute-0 nova_compute[257087]: 2025-12-05 10:38:59.161 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5059 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:38:59 compute-0 nova_compute[257087]: 2025-12-05 10:38:59.161 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:38:59 compute-0 nova_compute[257087]: 2025-12-05 10:38:59.163 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:59 compute-0 nova_compute[257087]: 2025-12-05 10:38:59.163 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:38:59 compute-0 nova_compute[257087]: 2025-12-05 10:38:59.165 257094 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:38:59 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:38:59 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:38:59 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:38:59.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:38:59 compute-0 sudo[300038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 10:38:59 compute-0 sudo[300038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 10:38:59 compute-0 sudo[300038]: pam_unix(sudo:session): session closed for user root
Dec 05 10:38:59 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 05 10:38:59 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737615367' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:38:59 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.18246 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:38:59 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28480 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:00 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 05 10:39:00 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870856883' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 05 10:39:00 compute-0 ceph-mon[74418]: from='client.18225 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:39:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2533464369' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 05 10:39:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3410462981' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 05 10:39:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3122332547' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 05 10:39:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/737615367' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 10:39:00 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2870856883' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 05 10:39:00 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:39:00 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 05 10:39:00 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:39:00.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 05 10:39:00 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:39:00 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:39:00 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1503: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:39:00 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:39:00 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='client.18246 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='client.28480 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/538239667' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:39:01 compute-0 ceph-mon[74418]: pgmap v1503: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/1461653996' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/3299279308' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/1170690061' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:39:01 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3919610831' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 05 10:39:01 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:39:01 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:39:01 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:39:01.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:39:01 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28525 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:01 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28439 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:02 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.18330 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:02 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:39:02 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:39:02 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.102 - anonymous [05/Dec/2025:10:39:02.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:39:02 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 10:39:02 compute-0 ceph-mon[74418]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 10:39:02 compute-0 ceph-mon[74418]: from='client.28525 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2090460409' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 05 10:39:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/1556103279' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 05 10:39:02 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/2269811736' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 05 10:39:02 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28546 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:02 compute-0 ceph-mgr[74711]: log_channel(cluster) log [DBG] : pgmap v1504: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:39:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:39:02 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 05 10:39:02 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2167838240' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 05 10:39:02 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28555 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:39:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 05 10:39:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:39:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 05 10:39:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:39:02 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 05 10:39:03 compute-0 ceph-3c63ce0f-5206-59ae-8381-b67d0b6424b5-nfs-cephfs-2-0-compute-0-hocvro[265676]: 05/12/2025 10:39:03 : epoch 6932b0c3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 05 10:39:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Dec 05 10:39:03 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2842013497' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 10:39:03 compute-0 ceph-mon[74418]: from='client.28439 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:03 compute-0 ceph-mon[74418]: from='client.18330 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:03 compute-0 ceph-mon[74418]: from='client.28546 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:03 compute-0 ceph-mon[74418]: pgmap v1504: 353 pgs: 353 active+clean; 41 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 05 10:39:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/2537804355' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 05 10:39:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2167838240' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 05 10:39:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.101:0/4209237256' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 05 10:39:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.102:0/3556594737' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 05 10:39:03 compute-0 ceph-mon[74418]: from='client.? 192.168.122.100:0/2842013497' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 10:39:03 compute-0 radosgw[95374]: ====== starting new request req=0x7f134ae1d5d0 =====
Dec 05 10:39:03 compute-0 radosgw[95374]: ====== req done req=0x7f134ae1d5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 05 10:39:03 compute-0 radosgw[95374]: beast: 0x7f134ae1d5d0: 192.168.122.100 - anonymous [05/Dec/2025:10:39:03.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 05 10:39:03 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28469 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:39:03 compute-0 ceph-mon[74418]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 05 10:39:03 compute-0 ceph-mon[74418]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2331635692' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 05 10:39:03 compute-0 ceph-mgr[74711]: log_channel(audit) log [DBG] : from='client.28573 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
